Noise-based logic (NBL) [1][2][3][4][5][6][7][8] is a class of multivalued deterministic logic schemes, developed in the twenty-first century, in which the logic values and bits are represented by different realizations of a stochastic process. The concept of noise-based logic and its name were created by Laszlo B. Kish. The foundational paper [3] notes that the idea was inspired by the stochasticity of brain signals and by unconventional noise-based communication schemes such as the Kish cypher.
The logic values are represented by multi-dimensional "vectors" (orthogonal functions) and their superpositions, where the orthogonal basis vectors are independent noises. By the proper combination (products or set-theoretical products) of the basis noises, which are called noise-bits, a logic hyperspace can be constructed with D(N) = 2^N dimensions, where N is the number of noise-bits. Thus N noise-bits in a single wire correspond to a system of 2^N classical bits that can express 2^(2^N) different logic values. Independent realizations of a zero-mean stochastic process have zero cross-correlation with each other and with other zero-mean stochastic processes. Thus the basis noise vectors are orthogonal not only to each other; they and all the noise-based logic states (superpositions) are also orthogonal to any background noise in the hardware. Therefore, the noise-based logic concept is robust against background noise, a property that can potentially offer high energy efficiency.
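As a minimal numerical sketch of this orthogonality (assuming unit-variance Gaussian noises; the variable names are illustrative, not from the cited papers), independent zero-mean noises have near-zero cross-correlation with each other and with any background noise, while each correlates perfectly with itself:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000          # samples per realization (illustrative choice)

# Two independent zero-mean Gaussian basis noises, plus an unrelated
# background noise representing hardware noise.
n1 = rng.normal(0.0, 1.0, T)
n2 = rng.normal(0.0, 1.0, T)
background = rng.normal(0.0, 1.0, T)

def corr(a, b):
    """Normalized cross-correlation (time average of the product)."""
    return np.mean(a * b) / np.sqrt(np.mean(a * a) * np.mean(b * b))

print(corr(n1, n1))               # ~1: a basis noise correlates with itself
print(corr(n1, n2))               # ~0: independent basis noises are orthogonal
print(corr(n1 + n2, background))  # ~0: superpositions are orthogonal to background noise
```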
In the paper [3] where noise-based logic was first introduced, generic stochastic processes with zero mean were proposed, and a system of orthogonal sinusoidal signals was also proposed as a deterministic-signal version of the logic system. The mathematical analysis of statistical errors and signal energy was limited to the cases of Gaussian noises and their superpositions as logic signals in the basic logic space, and of their products and superpositions of their products in the logic hyperspace (see also [4]). In the subsequent brain logic scheme, [5] the logic signals were, similarly to neural signals, unipolar spike sequences generated by a Poisson process, together with set-theoretical unifications (superpositions) and intersections (products) of different spike sequences. Later, in the instantaneous noise-based logic schemes [6][7] and in the computation work, [8] random telegraph waves (clocked in time, bipolar, with fixed absolute amplitude) were also utilized as one of the simplest stochastic processes available for NBL. Choosing unit amplitude and symmetric probabilities, the resulting random telegraph wave has probability 0.5 of being in the +1 or the −1 state, which is held over the whole clock period.
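A minimal sketch of the two signal types just described, under assumed parameters (a clocked random telegraph wave with unit amplitude and symmetric probabilities, and a Poisson spike train with a chosen rate); the function names are illustrative, not from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_telegraph_wave(n_periods, samples_per_period=1):
    """Clocked bipolar RTW: each clock period is +1 or -1 with probability
    0.5, and the value is held over the whole period."""
    signs = rng.choice([-1.0, 1.0], size=n_periods)
    return np.repeat(signs, samples_per_period)

def poisson_spike_train(rate, duration, dt):
    """Unipolar spike sequence from a Poisson process (brain-logic style):
    each time step of width dt contains a spike with probability rate*dt."""
    return (rng.random(int(duration / dt)) < rate * dt).astype(float)

print(random_telegraph_wave(10))             # e.g. [ 1. -1. -1.  1. ...]
print(poisson_spike_train(50.0, 0.2, 1e-3))  # sparse 0/1 spike sequence
```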
Noise-based logic gates can be classified according to how the gate identifies the logic value at its input. The first gates [3][4] analyzed the statistical correlations between the input signal and the reference noises. Their advantage is robustness against background noise; their disadvantages are slow speed and higher hardware complexity. The instantaneous logic gates [5][6][7] are fast and have low complexity, but they are not robust against background noise. Either neural-spike-type signals or bipolar random telegraph waves of unit absolute amplitude, with randomness only in the sign of the amplitude, offer very simple instantaneous logic gates. Then linear or analog devices are unnecessary, and the scheme can operate in the digital domain. However, whenever instantaneous logic must be interfaced with classical logic schemes, the interface must use correlator-based logic gates for an error-free signal. [6]
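The following toy sketch illustrates why sign-only random telegraph waves keep such gates in the digital domain; it is only a schematic illustration, not the full gate constructions of [6][7]. The pointwise product of two ±1 waves is again a ±1 wave, computable as an XOR on sign bits, and multiplying twice by the same reference restores the original:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 16  # clock periods (illustrative)

# Reference random telegraph waves taking only the values +1 and -1.
R1 = rng.choice([-1, 1], size=T)
R2 = rng.choice([-1, 1], size=T)

# The pointwise product of two +/-1 waves is again a +/-1 wave; on sign
# bits this is just an XOR, so no analog multiplier is needed.
product = R1 * R2
assert np.all(np.isin(product, (-1, 1)))

# Multiplying again by the same reference restores the original wave,
# since R2 * R2 = 1 everywhere.
assert np.array_equal(product * R2, R1)
```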
All the noise-based logic schemes listed above have been proven universal. [3][6][7] The papers typically construct the NOT and the AND gates to prove universality, because having both of them is a sufficient condition for the universality of a Boolean logic.
The string verification work [8] over a slow communication channel shows a powerful computing application, where the method is inherently based on calculating hash functions. The scheme is based on random telegraph waves, and the paper [8] mentions that the authors intuitively conclude that the intelligence of the brain uses similar operations to make a reasonably good decision based on a limited amount of information. The superposition of the first D(N) = 2^N integers can be produced with only 2N operations, which the authors call the "Achilles ankle operation" in the paper. [4]
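The economy of this kind of superposition construction can be illustrated with the distributive identity behind the hyperspace scheme of [4]: the product of the N pairwise sums of basis noises equals the superposition of all 2^N product states, yet it costs only about 2N operations (N additions and N−1 multiplications). The sketch below, with illustrative names and ±1 noises as an assumed choice, checks this identity numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 3, 8   # number of noise-bits and samples (illustrative)

# Reference noises: L[i] and H[i] are the "low"/"high" basis noises of bit i.
L = rng.choice([-1.0, 1.0], size=(N, T))
H = rng.choice([-1.0, 1.0], size=(N, T))

# Cheap construction: N additions and N-1 multiplications (~2N operations).
cheap = np.prod(L + H, axis=0)

# Brute force: explicit sum of all 2**N product states.
brute = np.zeros(T)
for k in range(2 ** N):
    term = np.ones(T)
    for i in range(N):
        term *= H[i] if (k >> i) & 1 else L[i]
    brute += term

assert np.allclose(cheap, brute)   # the two constructions agree
```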
Preliminary schemes have already been published [8] to utilize noise-based logic in practical computers. However, it is clear from these papers that this young field still has a long way to go before it is seen in everyday applications.
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies. To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme.
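A toy example of the coding idea, conceptually parallel to the orthogonal noises of NBL: assuming length-4 Walsh spreading codes and one data bit per user (a real CDMA system adds many further layers), each receiver recovers its own bit by correlating against its own code:

```python
import numpy as np

# Two orthogonal spreading codes (length-4 Walsh codes).
c1 = np.array([1, 1, 1, 1])
c2 = np.array([1, -1, 1, -1])

# Each user spreads one data bit (+1 or -1); transmissions overlap in time.
bit1, bit2 = +1, -1
channel = bit1 * c1 + bit2 * c2

# Receivers despread by correlating the shared channel with their own code.
print(np.dot(channel, c1) / len(c1))   # +1 -> user 1's bit
print(np.dot(channel, c2) / len(c2))   # -1 -> user 2's bit
```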
In electronics and telecommunications, modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a separate signal called the modulation signal that typically contains information to be transmitted. For example, the modulation signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, a bitstream from a computer.
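For instance, a minimal amplitude-modulation sketch with assumed illustrative parameters (a 440 Hz tone as the modulation signal, a 5 kHz carrier):

```python
import numpy as np

fs, fc = 48_000, 5_000                 # sample rate and carrier frequency, Hz
t = np.arange(0, 0.01, 1 / fs)         # 10 ms of signal

message = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone as the modulation signal

# Amplitude modulation: the message varies the amplitude of the carrier.
carrier = np.cos(2 * np.pi * fc * t)
am_signal = (1.0 + 0.5 * message) * carrier   # 0.5 = modulation index
```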
A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
In quantum computing, a qubit or quantum bit is a basic unit of quantum information—the quantum version of the classic binary bit, physically realized with a two-state device. A qubit is a two-state quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron, in which the two levels can be taken as spin up and spin down; or the polarization of a single photon, in which the two states can be taken as horizontal and vertical linear polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of multiple states simultaneously, a property that is fundamental to quantum mechanics and quantum computing.
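A minimal numerical sketch of this contrast, representing the basis states as two-component vectors (an assumed illustrative encoding) and applying the Born rule for measurement probabilities:

```python
import numpy as np

# Computational basis states |0> and |1> as column vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An equal coherent superposition (|0> + |1>) / sqrt(2).
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
p0, p1 = np.abs(psi) ** 2
print(p0, p1)   # 0.5 0.5 -- unlike a classical bit, which is 0 or 1
```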
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor. It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to best known classical (non-quantum) algorithms. On the other hand, factoring numbers of practical significance requires far more qubits than available in the near future. Another concern is that noise in quantum circuits may undermine results, requiring additional qubits for quantum error correction.
Quantum superposition is a fundamental principle of quantum mechanics that states that linear combinations of solutions to the Schrödinger equation are also solutions of the Schrödinger equation. This follows from the fact that the Schrödinger equation is a linear differential equation in time and position. More precisely, the state of a system is given by a linear combination of all the eigenfunctions of the Schrödinger equation governing that system.
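The stated linearity can be checked in one line: if ψ₁ and ψ₂ solve the time-dependent Schrödinger equation, then so does any linear combination of them:

```latex
% If \psi_1 and \psi_2 solve the time-dependent Schrödinger equation,
% then so does any linear combination c_1\psi_1 + c_2\psi_2:
i\hbar\,\partial_t\!\left(c_1\psi_1 + c_2\psi_2\right)
  = c_1\,i\hbar\,\partial_t\psi_1 + c_2\,i\hbar\,\partial_t\psi_2
  = c_1\hat{H}\psi_1 + c_2\hat{H}\psi_2
  = \hat{H}\!\left(c_1\psi_1 + c_2\psi_2\right).
```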
In computing, a hardware random number generator (HRNG), true random number generator (TRNG), non-deterministic random bit generator (NRBG), or physical random number generator is a device that generates random numbers from a physical process capable of producing entropy. This is unlike a pseudorandom number generator, which utilizes a deterministic algorithm, and unlike non-physical nondeterministic random bit generators, which do not include hardware dedicated to the generation of entropy.
Landauer's principle is a physical principle pertaining to the lower theoretical limit of energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings.
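As a worked example of the bound (assuming room temperature, T = 300 K):

```latex
% Minimum heat dissipated per erased bit at room temperature (T = 300 K):
E_{\min} = k_B T \ln 2
         \approx \left(1.38\times10^{-23}\,\mathrm{J/K}\right)
           \times 300\,\mathrm{K} \times 0.693
         \approx 2.9\times10^{-21}\,\mathrm{J}.
```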
Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.
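A minimal sketch of a method with random iterates (pure random search over an interval; the objective function and bounds are illustrative choices):

```python
import random

def random_search(f, lo, hi, iters=1000, seed=0):
    """Minimize f on [lo, hi] using random iterates (a minimal SO method)."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

print(random_search(lambda x: (x - 2.0) ** 2, -10, 10))  # near (2.0, 0.0)
```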
Objective-collapse theories, also known as spontaneous collapse models or dynamical reduction models, are proposed solutions to the measurement problem in quantum mechanics. As with other interpretations of quantum mechanics, they are possible explanations of why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate: it works well for microscopic systems but progressively loses its validity when the mass/complexity of the system increases.
A Boolean network consists of a discrete set of Boolean variables, each of which has a Boolean function assigned to it that takes inputs from a subset of those variables and whose output determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time t+1 is determined by evaluating each variable's function on the state of the network at time t. This may be done synchronously or asynchronously.
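A minimal sketch of a synchronous update for an assumed 3-node network (the particular Boolean functions are illustrative):

```python
# A 3-node Boolean network: each node's function reads the previous state.
funcs = {
    "a": lambda s: s["b"] and s["c"],   # AND of its two inputs
    "b": lambda s: not s["a"],          # NOT
    "c": lambda s: s["a"] or s["b"],    # OR
}

def step(state):
    """Synchronous update: all nodes are evaluated on the state at time t."""
    return {node: f(state) for node, f in funcs.items()}

state = {"a": True, "b": False, "c": True}
for t in range(5):
    print(t, state)
    state = step(state)
```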
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool.
Laszlo Bela Kish is a physicist and professor of Electrical and Computer Engineering at Texas A&M University. His activities include a wide range of issues surrounding the physics and technical applications of stochastic fluctuations (noises) in physical, biological and technological systems, including nanotechnology. His earlier long-term positions include the Department of Experimental Physics, University of Szeged, Hungary, and the Angstrom Laboratory, Uppsala University, Sweden (1997–2001). During the same periods he also conducted scientific research in short-term positions, such as at the Eindhoven University of Technology, the University of Cologne, the National Research Laboratory of Metrology, and the University of Birmingham, among others.
Stochastic computing is a collection of techniques that represent continuous values by streams of random bits. Complex computations can then be computed by simple bit-wise operations on the streams. Stochastic computing is distinct from the study of randomized algorithms.
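For example, multiplication of two values encoded as independent Bernoulli bit streams reduces to a bitwise AND, since for independent streams P(x AND y) = P(x)P(y); a minimal sketch with an assumed stream length:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000   # stream length trades accuracy for time

def to_stream(p):
    """Encode a value in [0, 1] as a Bernoulli bit stream with P(1) = p."""
    return rng.random(n) < p

x, y = to_stream(0.6), to_stream(0.5)
product = x & y          # bitwise AND multiplies the encoded values
print(product.mean())    # ~0.30, i.e. 0.6 * 0.5
```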
A digital signal is a signal that represents data as a sequence of discrete values; at any given time it can only take on, at most, one of a finite number of values. This contrasts with an analog signal, which represents continuous values; at any given time it represents a real number within a continuous range of values.
Sensing of phage-triggered ion cascades (SEPTIC) is a prompt bacterium identification method based on fluctuation-enhanced sensing in a fluid medium. The advantages of SEPTIC are the specificity and speed offered by the characteristics of phage infection, the sensitivity due to fluctuation-enhanced sensing, and the durability originating from the robustness of phages. An idealistic SEPTIC device may be as small as a pen and may be able to identify a library of different bacteria within a measurement window of a few minutes.
Fluctuation-enhanced sensing (FES) is a specific type of chemical or biological sensing where the stochastic component, the noise, of the sensor signal is analyzed. The stages following the sensor in a FES system typically contain filters and preamplifier(s) to extract and amplify the stochastic signal components, which are usually microscopic temporal fluctuations that are orders of magnitude weaker than the sensor signal. Then selected statistical properties of the amplified noise are analyzed, and a corresponding pattern is generated as the stochastic fingerprint of the sensed agent. Often the power density spectrum of the stochastic signal is used as the output pattern; however, FES has also proven effective with more advanced methods, such as higher-order statistics.
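A minimal sketch of the pattern-generation step, using synthetic noise as a stand-in for a real sensor signal and the Welch power-spectral-density estimate as the fingerprint; all parameters are illustrative:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 1000.0                      # sampling rate in Hz (illustrative)

# Stand-in sensor noise: white noise shaped by a crude moving-average
# low-pass filter, mimicking the weak stochastic component extracted
# after filtering and amplification.
raw = rng.normal(size=10_000)
sensor_noise = np.convolve(raw, np.ones(20) / 20, mode="same")

# Power spectral density of the noise serves as the output pattern.
freqs, psd = welch(sensor_noise, fs=fs, nperseg=1024)
fingerprint = psd / psd.sum()    # normalized pattern for later comparison
```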
In mathematics and telecommunications, stochastic geometry models of wireless networks refer to mathematical models based on stochastic geometry that are designed to represent aspects of wireless networks. The related research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields including point processes, spatial statistics, geometric probability, percolation theory, as well as methods from more general mathematical disciplines such as geometry, probability theory, stochastic processes, queueing theory, information theory, and Fourier analysis.
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.