Quantum capacity

In the theory of quantum communication, the quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver. It is also equal to the highest rate at which entanglement can be generated over the channel, and forward classical communication cannot improve it. The quantum capacity theorem is important for the theory of quantum error correction, and more broadly for the theory of quantum computation. The theorem giving a lower bound on the quantum capacity of any channel is colloquially known as the LSD theorem, after the authors Lloyd,[1] Shor,[2] and Devetak,[3] who proved it with increasing standards of rigor.

Hashing bound for Pauli channels

The LSD theorem states that the coherent information of a quantum channel is an achievable rate for reliable quantum communication. For a Pauli channel, the coherent information has a simple form, and the proof that it is achievable is particularly simple as well. The theorem can be proved for this special case by exploiting random stabilizer codes and correcting only the likely errors that the channel produces.

Theorem (hashing bound). There exists a stabilizer quantum error-correcting code that achieves the hashing limit $R = 1 - H(\mathbf{p})$ for a Pauli channel of the following form:

\[
\rho \mapsto p_{I}\rho + p_{X}X\rho X + p_{Y}Y\rho Y + p_{Z}Z\rho Z,
\]

where $\mathbf{p} = (p_{I}, p_{X}, p_{Y}, p_{Z})$ and $H(\mathbf{p})$ is the entropy of this probability vector.
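As a numerical illustration (not part of the original statement; the depolarizing-channel parameters below are an arbitrary choice), the following Python sketch evaluates the hashing bound $R = 1 - H(\mathbf{p})$ for a given Pauli error vector.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy in bits, ignoring zero entries."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def hashing_bound(p_I, p_X, p_Y, p_Z):
    """Hashing bound R = 1 - H(p) for a Pauli channel with error vector p."""
    return 1.0 - shannon_entropy([p_I, p_X, p_Y, p_Z])

# Illustrative example: a depolarizing channel with total error probability 0.1,
# split evenly among X, Y, and Z errors.
print(hashing_bound(0.9, 0.1 / 3, 0.1 / 3, 0.1 / 3))  # ~0.3725 qubits per channel use
```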

Proof. Consider correcting only the typical errors. That is, consider defining the typical set of errors as follows:

\[
T_{\delta}^{p^{n}} \equiv \left\{ a^{n} : \left\vert -\tfrac{1}{n}\log_{2}p(a^{n}) - H(\mathbf{p}) \right\vert \leq \delta \right\},
\]

where $a^{n}$ is some sequence consisting of the letters $\{I, X, Y, Z\}$ and $p(a^{n})$ is the probability that an IID Pauli channel issues some tensor-product error $E_{a^{n}} \equiv E_{a_{1}} \otimes \cdots \otimes E_{a_{n}}$. This typical set consists of the likely errors in the sense that

\[
\sum_{a^{n} \in T_{\delta}^{p^{n}}} p(a^{n}) \geq 1 - \epsilon,
\]

for all $\epsilon > 0$ and sufficiently large $n$. The error-correcting conditions[4] for a stabilizer code $\mathcal{S}$ in this case are that $\{E_{a^{n}} : a^{n} \in T_{\delta}^{p^{n}}\}$ is a correctable set of errors if

\[
E_{a^{n}}^{\dagger}E_{b^{n}} \notin N(\mathcal{S}) \setminus \mathcal{S},
\]

for all error pairs $E_{a^{n}}$ and $E_{b^{n}}$ such that $a^{n}, b^{n} \in T_{\delta}^{p^{n}}$, where $N(\mathcal{S})$ is the normalizer of $\mathcal{S}$. Also, we consider the expectation of the error probability under a random choice of a stabilizer code.
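For intuition about the typical set, here is a small Monte Carlo sketch (the error probabilities, block length, and $\delta$ below are illustrative choices, not values from the text) that samples IID Pauli error sequences and estimates how much probability mass the typical set captures.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative Pauli error probabilities for (I, X, Y, Z).
p = np.array([0.9, 0.05, 0.03, 0.02])
H = float(-(p * np.log2(p)).sum())       # entropy H(p) of the error vector
n, delta, trials = 1000, 0.05, 10_000

# a^n is delta-typical when |-(1/n) log2 p(a^n) - H(p)| <= delta.
hits = 0
for _ in range(trials):
    seq = rng.choice(4, size=n, p=p)      # sample the error sequence a^n
    log_p = float(np.log2(p[seq]).sum())  # log2 p(a^n) for the IID channel
    hits += abs(-log_p / n - H) <= delta
print(hits / trials)                      # tends to 1 as n grows
```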

Proceed as follows:

\begin{align}
\mathbb{E}_{\mathcal{S}}\{p_{e}\}
&= \mathbb{E}_{\mathcal{S}}\Big\{ \sum_{a^{n}} p(a^{n})\,\mathcal{I}\big(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\big) \Big\} \\
&\leq \mathbb{E}_{\mathcal{S}}\Big\{ \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\mathcal{I}\big(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\big) \Big\} + \epsilon \\
&= \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\mathbb{E}_{\mathcal{S}}\big\{\mathcal{I}\big(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\big)\big\} + \epsilon \\
&= \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\Pr_{\mathcal{S}}\big\{E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\big\} + \epsilon .
\end{align}

The first equality follows by definition: $\mathcal{I}$ is an indicator function equal to one if $E_{a^{n}}$ is uncorrectable under $\mathcal{S}$ and equal to zero otherwise. The first inequality follows, since we correct only the typical errors because the atypical error set has negligible probability mass. The second equality follows by exchanging the expectation and the sum. The third equality follows because the expectation of an indicator function is the probability that the event it selects occurs. Continuing, we have

\begin{align}
&= \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\Pr_{\mathcal{S}}\big\{\exists\, b^{n}\in T_{\delta}^{p^{n}} : b^{n}\neq a^{n},\ E_{a^{n}}^{\dagger}E_{b^{n}}\in N(\mathcal{S})\setminus\mathcal{S}\big\} + \epsilon \\
&\leq \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\Pr_{\mathcal{S}}\big\{\exists\, b^{n}\in T_{\delta}^{p^{n}} : b^{n}\neq a^{n},\ E_{a^{n}}^{\dagger}E_{b^{n}}\in N(\mathcal{S})\big\} + \epsilon \\
&= \sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n})\,\Pr_{\mathcal{S}}\Big\{\bigcup_{\substack{b^{n}\in T_{\delta}^{p^{n}} \\ b^{n}\neq a^{n}}}\big\{E_{a^{n}}^{\dagger}E_{b^{n}}\in N(\mathcal{S})\big\}\Big\} + \epsilon \\
&\leq \sum_{\substack{a^{n},\,b^{n}\in T_{\delta}^{p^{n}} \\ b^{n}\neq a^{n}}} p(a^{n})\,\Pr_{\mathcal{S}}\big\{E_{a^{n}}^{\dagger}E_{b^{n}}\in N(\mathcal{S})\big\} + \epsilon \\
&\leq \sum_{\substack{a^{n},\,b^{n}\in T_{\delta}^{p^{n}} \\ b^{n}\neq a^{n}}} p(a^{n})\,2^{-(n-k)} + \epsilon .
\end{align}

The first equality follows from the error-correcting conditions for a quantum stabilizer code, where $N(\mathcal{S})$ is the normalizer of $\mathcal{S}$. The first inequality follows by ignoring any potential degeneracy in the code: we consider an error uncorrectable if it lies in the normalizer $N(\mathcal{S})$, and the probability can only be larger because $N(\mathcal{S})\setminus\mathcal{S} \subseteq N(\mathcal{S})$. The second equality follows by realizing that the probabilities for the existence criterion and the union of events are equivalent. The second inequality follows by applying the union bound. The third inequality follows from the fact that the probability for a fixed operator not equal to the identity commuting with the stabilizer operators of a random stabilizer can be upper bounded as follows:

\[
\Pr_{\mathcal{S}}\big\{E_{a^{n}}^{\dagger}E_{b^{n}}\in N(\mathcal{S})\big\} = \frac{2^{n+k}-1}{2^{2n}-1} \leq 2^{-(n-k)}.
\]

The reasoning here is that the random choice of a stabilizer code is equivalent to fixing operators $Z_{1}$, ..., $Z_{n-k}$ and performing a uniformly random Clifford unitary. The probability that a fixed operator commutes with $\overline{Z}_{1}$, ..., $\overline{Z}_{n-k}$ is then just the number of non-identity operators in the normalizer ($2^{n+k}-1$) divided by the total number of non-identity operators ($2^{2n}-1$). After applying the above bound, we then exploit the following typicality bounds:

\[
\sum_{a^{n}\in T_{\delta}^{p^{n}}} p(a^{n}) \leq 1, \qquad \big\vert T_{\delta}^{p^{n}}\big\vert \leq 2^{n[H(\mathbf{p})+\delta]},
\]

so that

\[
\sum_{\substack{a^{n},\,b^{n}\in T_{\delta}^{p^{n}} \\ b^{n}\neq a^{n}}} p(a^{n})\,2^{-(n-k)} + \epsilon \leq 2^{-(n-k)}\,2^{n[H(\mathbf{p})+\delta]} + \epsilon = 2^{-n[1-H(\mathbf{p})-\delta-k/n]} + \epsilon .
\]
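The counting bound above is easy to check numerically; the following snippet (the block sizes are chosen only for illustration) verifies that $(2^{n+k}-1)/(2^{2n}-1) \leq 2^{-(n-k)}$ for a few values of $n$ and $k$.

```python
# Numerical sanity check of (2**(n+k) - 1) / (2**(2*n) - 1) <= 2**(-(n - k)).
for n, k in [(5, 1), (10, 4), (20, 10)]:
    lhs = (2 ** (n + k) - 1) / (2 ** (2 * n) - 1)
    rhs = 2.0 ** (-(n - k))
    print(f"n={n}, k={k}: {lhs:.3e} <= {rhs:.3e} -> {lhs <= rhs}")
```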

We conclude that as long as the rate $k/n = 1 - H(\mathbf{p}) - 2\delta$, the expectation of the error probability becomes arbitrarily small, so that there exists at least one choice of a stabilizer code with the same bound on the error probability. Since $\delta$ and $\epsilon$ can be made arbitrarily small, rates approaching the hashing limit $1 - H(\mathbf{p})$ are achievable.
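To see how quickly this bound decays, one can plug in numbers (all values below are purely illustrative): at rate $k/n = 1 - H(\mathbf{p}) - 2\delta$, the bound on the expected error probability is $2^{-n\delta} + \epsilon$, which the sketch below evaluates for increasing block lengths.

```python
import numpy as np

p = np.array([0.9, 0.05, 0.03, 0.02])   # illustrative Pauli error vector
H = float(-(p * np.log2(p)).sum())
delta, eps = 0.05, 1e-3
rate = 1 - H - 2 * delta                 # code rate k/n from the conclusion above
for n in (100, 500, 1000):
    print(f"n={n}, rate={rate:.3f}, bound={2.0 ** (-n * delta) + eps:.3e}")
```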

References

  1. Seth Lloyd (1997). "Capacity of the noisy quantum channel". Physical Review A. 55 (3): 1613–1622. arXiv:quant-ph/9604015. Bibcode:1997PhRvA..55.1613L. doi:10.1103/PhysRevA.55.1613. S2CID 5555850.
  2. Peter Shor (2002). "The quantum channel capacity and coherent information" (PDF). Lecture Notes, MSRI Workshop on Quantum Computation.
  3. Igor Devetak (2005). "The private classical capacity and quantum capacity of a quantum channel". IEEE Transactions on Information Theory. 51: 44–55. arXiv:quant-ph/0304127. doi:10.1109/TIT.2004.839515. S2CID 12246393.
  4. Nielsen, Michael A.; Chuang, Isaac L. (2000). Quantum Computation and Quantum Information. Cambridge University Press. ISBN 9780521635035.