Wave function collapse

In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an "observation". It is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is the continuous evolution via the Schrödinger equation. [1] Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. [2] [3] Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system's states and the environment's states. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation. [4]

In 1927, Werner Heisenberg used the idea of wave function reduction to explain quantum measurement. [5] However, if collapse were a fundamental physical phenomenon, rather than just the epiphenomenon of some other process, it would mean nature was fundamentally stochastic, i.e. nondeterministic, an undesirable property for a theory. [2] [6] [7] This issue remained until quantum decoherence entered mainstream opinion after its reformulation in the 1980s. [2] [4] [8] Decoherence explains the perception of wave function collapse in terms of interacting large- and small-scale quantum systems. [9]

Mathematical description

Before collapse, the wave function may be any square-integrable function. This function is expressible as a linear combination of the eigenstates of any observable. Observables represent classical dynamical variables, and when one is measured by a classical observer, the wave function is projected onto a random eigenstate of that observable. The observer simultaneously measures the classical value of that observable to be the eigenvalue of the final state. [10]

Mathematical background

The quantum state of a physical system is described by a wave function (in turn – an element of a projective Hilbert space). This can be expressed as a vector using Dirac or bra–ket notation:

$$|\psi\rangle = \sum_i c_i |\phi_i\rangle.$$

The kets $|\phi_1\rangle, |\phi_2\rangle, |\phi_3\rangle, \ldots$ specify the different quantum "alternatives" available - a particular quantum state. They form an orthonormal eigenvector basis; formally,

$$\langle \phi_i | \phi_j \rangle = \delta_{ij}.$$

Here $\delta_{ij}$ represents the Kronecker delta, equal to 1 when $i = j$ and 0 otherwise.

An observable (i.e. measurable parameter of the system) is associated with each eigenbasis, with each quantum alternative having a specific value or eigenvalue, $e_i$, of the observable. A "measurable parameter of the system" could be the usual position r and the momentum p of (say) a particle, but also its energy E, z components of spin ($s_z$), orbital ($L_z$) and total angular ($J_z$) momenta, etc. In the basis representation these are respectively $|\phi_i\rangle = |r_i\rangle,\ |p_i\rangle,\ |E_i\rangle,\ |s_{z,i}\rangle,\ |L_{z,i}\rangle,\ |J_{z,i}\rangle$.

The coefficients $c_1, c_2, c_3, \ldots$ are the probability amplitudes corresponding to each basis state $|\phi_1\rangle, |\phi_2\rangle, |\phi_3\rangle, \ldots$. These are complex numbers. The modulus squared of $c_i$, that is $|c_i|^2 = c_i^* c_i$ (where $^*$ denotes the complex conjugate), is the probability of measuring the system to be in the state $|\phi_i\rangle$.

For simplicity in the following, all wave functions are assumed to be normalized; the total probability of measuring all possible states is one:

$$\langle \psi | \psi \rangle = \sum_i |c_i|^2 = 1.$$
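
As a concrete numerical illustration (not part of the formal description above), the following Python sketch represents a state by its amplitudes $c_i$ in a small finite basis, normalizes it, and computes the Born probabilities $|c_i|^2$; the basis size and amplitude values are arbitrary choices for the example.

```python
import numpy as np

# Amplitudes c_i of |psi> in a 3-state orthonormal basis (arbitrary example values)
c = np.array([1 + 1j, 2, 1j], dtype=complex)
c /= np.linalg.norm(c)                # enforce <psi|psi> = sum_i |c_i|^2 = 1

born_probs = np.abs(c) ** 2           # |c_i|^2 = c_i* c_i
print(born_probs, born_probs.sum())   # individual probabilities, summing to 1
```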

The process

With these definitions it is easy to describe the process of collapse. For any observable, the wave function is initially some linear combination of the eigenbasis $\{|\phi_i\rangle\}$ of that observable. When an external agency (an observer, experimenter) measures the observable associated with the eigenbasis $\{|\phi_i\rangle\}$, the wave function collapses from the full $|\psi\rangle$ to just one of the basis eigenstates, $|\phi_i\rangle$, that is:

$$|\psi\rangle \to |\phi_i\rangle.$$

The probability of collapsing to a given eigenstate $|\phi_k\rangle$ is the Born probability $P_k = |c_k|^2$. Post-measurement, all other elements of the wave function vector, $c_{i \neq k}$, have "collapsed" to zero, and $|c_k|^2 = 1$.

More generally, collapse is defined for an operator $\hat{Q}$ with eigenbasis $\{|\phi_i\rangle\}$. If the system is in state $|\psi\rangle$ and $\hat{Q}$ is measured, the probability of collapsing the system to the eigenstate $|\phi_i\rangle$ (and measuring the corresponding eigenvalue $q_i$) would be $|\langle \phi_i | \psi \rangle|^2$. Note that this is not the probability that the particle is in state $|\phi_i\rangle$; it is in state $|\psi\rangle$ until cast to an eigenstate of $\hat{Q}$.
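
The following Python sketch (an illustration, not part of the article's formalism) simulates such a projective measurement numerically: a Hermitian operator is diagonalized, an outcome is drawn with probability $|\langle \phi_i | \psi \rangle|^2$, and the state is replaced by the corresponding eigenvector. The particular operator and state are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng()

# Arbitrary example: a Hermitian operator Q and a normalized state |psi> in C^3
Q = np.array([[1, 1j, 0],
              [-1j, 2, 0],
              [0, 0, 3]], dtype=complex)
psi = np.array([1, 1j, 1], dtype=complex)
psi /= np.linalg.norm(psi)

eigvals, eigvecs = np.linalg.eigh(Q)   # columns of eigvecs form the eigenbasis {|phi_i>}
amps = eigvecs.conj().T @ psi          # <phi_i|psi>
probs = np.abs(amps) ** 2              # Born probabilities, summing to 1

k = rng.choice(len(probs), p=probs)    # random measurement outcome
q_measured = eigvals[k]                # observed eigenvalue q_k
psi_after = eigvecs[:, k]              # post-measurement state |phi_k>

print(q_measured, np.abs(eigvecs.conj().T @ psi_after) ** 2)  # all weight now on |phi_k>
```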

However, we never observe collapse to a single eigenstate of a continuous-spectrum operator (e.g. position, momentum, or a scattering Hamiltonian), because such eigenfunctions are non-normalizable. In these cases, the wave function will partially collapse to a linear combination of "close" eigenstates (necessarily involving a spread in eigenvalues) that embodies the imprecision of the measurement apparatus. The more precise the measurement, the tighter the range. Calculation of probability proceeds identically, except with an integral over the expansion coefficient $c(q, t)\,dq$. [11] This phenomenon is unrelated to the uncertainty principle, although increasingly precise measurements of one operator (e.g. position) will naturally homogenize the expansion coefficient of the wave function with respect to another, incompatible operator (e.g. momentum), lowering the probability of measuring any particular value of the latter.
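
A minimal sketch of such a partial collapse, assuming a position measurement of finite Gaussian resolution on a discretized grid (the grid, initial wave packet, registered position, and resolution below are arbitrary example choices): the apparatus multiplies the wave function by a window of width $\sigma$ around the registered value and renormalizes, leaving a spread of nearby position eigenvalues rather than a single one.

```python
import numpy as np

# Discretized position grid and a broad initial wave packet (arbitrary example)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 8.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Apparatus of resolution sigma registers the particle near x0: partial collapse
x0, sigma = 1.5, 0.2
window = np.exp(-(x - x0)**2 / (2.0 * sigma**2))
psi_after = psi * window
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dx)   # renormalize

# The post-measurement state is still spread over a narrow range of positions
mean_x = np.sum(x * np.abs(psi_after)**2 * dx)
width = np.sqrt(np.sum(x**2 * np.abs(psi_after)**2 * dx) - mean_x**2)
print(mean_x, width)
```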

The determination of the preferred basis

The complete set of orthogonal functions to which a wave function can collapse is also called the preferred basis. [2] There is no firm theoretical foundation for taking the preferred basis to be the eigenstates of observables such as position or momentum; in fact, the eigenstates of position are not even physical, owing to the infinite energy associated with them. A better approach is to derive the preferred basis from basic principles. It has been shown that only a special class of dynamical equations can collapse the wave function. [12] By applying one axiom of quantum mechanics together with the assumption that the preferred basis depends on the total Hamiltonian, a unique set of equations is obtained from the collapse equation, which determines the preferred basis for general situations. Depending on the system Hamiltonian and wave function, the determination equations may yield a preferred basis of energy eigenfunctions, of quasi-position eigenfunctions, or of mixed energy and quasi-position eigenfunctions, such as energy eigenfunctions for the interior of a macroscopic object and quasi-position eigenfunctions for the particles on its surface.

Quantum decoherence

Wave function collapse is not fundamental from the perspective of quantum decoherence. [13] There are several equivalent approaches to deriving this apparent collapse, such as the density-matrix approach, but each has the same effect: decoherence irreversibly converts the "averaged" or "environmentally traced-over" density matrix from a pure state to a reduced mixture, giving the appearance of wave function collapse.
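
As a schematic illustration (not drawn from the cited sources), the following Python sketch entangles a two-level system with an environment state, traces out the environment, and shows that the off-diagonal coherence of the reduced density matrix shrinks as the environment states become orthogonal; the states and overlaps are arbitrary example choices.

```python
import numpy as np

def reduced_density_matrix(e0, e1):
    """System qubit in (|0>|e0> + |1>|e1>)/sqrt(2), with the environment traced out."""
    return 0.5 * np.array([[1.0,             np.vdot(e1, e0)],
                           [np.vdot(e0, e1), 1.0]], dtype=complex)

# Environment states with tunable overlap <e0|e1> = cos(theta)
for theta in (0.0, np.pi / 4, np.pi / 2):
    e0 = np.array([1.0, 0.0])
    e1 = np.array([np.cos(theta), np.sin(theta)])
    rho = reduced_density_matrix(e0, e1)
    # As <e0|e1> -> 0, the off-diagonal terms vanish: an apparent collapse to a mixture
    print(round(theta, 3), rho[0, 1])
```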

History and context

The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. [14] Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. He, however, emphasized that it should not be understood as a physical process. [15] Niels Bohr also repeatedly cautioned that we must give up a “pictorial representation.” The founders of the Copenhagen Interpretation preferred to stress the mathematical formalism of what was occurring.

Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change:

  1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
  2. The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation).

In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and, in the absence of measurement, evolve according to the Schrödinger equation. However, when a measurement is made, the wave function collapses—from an observer's perspective—to just one of the basis states, and the property being measured uniquely acquires the eigenvalue of that particular state, $e_i$. After the collapse, the system again evolves according to the Schrödinger equation.
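
To make the contrast between the two processes concrete, here is a small numerical sketch (an illustration only): a two-level state evolves unitarily as $|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle$ (taking $\hbar = 1$), and a subsequent measurement of the energy collapses it onto one eigenstate of $H$. The Hamiltonian and initial state are arbitrary example values.

```python
import numpy as np

# Process 2: deterministic, unitary evolution |psi(t)> = exp(-iHt)|psi(0)> (hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)       # arbitrary example Hamiltonian
eigvals, V = np.linalg.eigh(H)                   # H = V diag(eigvals) V^dagger

def evolve(psi, t):
    """Apply exp(-iHt) via the spectral decomposition; the norm is preserved."""
    return V @ (np.exp(-1j * eigvals * t) * (V.conj().T @ psi))

psi_t = evolve(np.array([1.0, 0.0], dtype=complex), 2.0)
print(np.linalg.norm(psi_t))                     # 1.0: Schrödinger evolution is unitary

# Process 1: measuring the energy collapses psi_t onto one eigenstate of H
probs = np.abs(V.conj().T @ psi_t) ** 2          # Born probabilities
k = np.random.default_rng().choice(2, p=probs)   # probabilistic, discontinuous step
psi_after = V[:, k]                              # evolution via `evolve` resumes from here
```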

By explicitly dealing with the interaction of the object and the measuring instrument, von Neumann [1] attempted to demonstrate the consistency of the two processes of wave function change.

He was able to prove the possibility of a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account the experimental evidence available during the 1930s (in particular, the Compton–Simon experiment was paradigmatic), and many important present-day measurement procedures do not satisfy it (so-called measurements of the second kind). [16] [17] [18]

The existence of wave function collapse is required in some interpretations of quantum mechanics, such as the Copenhagen Interpretation. In others, such as the many-worlds interpretation, the collapse is considered a redundant or optional approximation.

The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem. The problem is deflected by the Copenhagen Interpretation, which postulates that this is a special characteristic of the "measurement" process. Everett's many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid; that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent.

Originating from de Broglie–Bohm theory, but no longer tied to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the consistent histories interpretation. A general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations. In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation.

The significance ascribed to the wave function varies from interpretation to interpretation, and varies even within an interpretation (such as the Copenhagen Interpretation). If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.

References

  1. J. von Neumann (1932). Mathematische Grundlagen der Quantenmechanik (in German). Berlin: Springer.
    J. von Neumann (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press.
  2. Schlosshauer, Maximilian (2005). "Decoherence, the measurement problem, and interpretations of quantum mechanics". Rev. Mod. Phys. 76 (4): 1267–1305. arXiv:quant-ph/0312059. Bibcode:2004RvMP...76.1267S. doi:10.1103/RevModPhys.76.1267. Retrieved 28 February 2013.
  3. Giacosa, Francesco (2014). "On unitary evolution and collapse in quantum mechanics". Quanta. 3 (1): 156–170. arXiv: 1406.2344 . doi:10.12743/quanta.v3i1.26.
  4. Zurek, Wojciech Hubert (2009). "Quantum Darwinism". Nature Physics. 5 (3): 181–188. arXiv:0903.5082. Bibcode:2009NatPh...5..181Z. doi:10.1038/nphys1202. Retrieved 28 February 2013.
  5. Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Z. Phys. 43: 172–198. Translated as "The actual content of quantum theoretical kinematics and mechanics".
  6. L. Bombelli. "Wave-Function Collapse in Quantum Mechanics". Topics in Theoretical Physics. Retrieved 2010-10-13.
  7. G. Jaeger (2017). ""Wave-Packet Reduction" and the Quantum Character of the Actualization of Potentia". Entropy. 19: 13. doi:10.3390/e19100513.
  8. M. Pusey; J. Barrett; T. Rudolph (2012). "On the reality of the quantum state". Nature Physics. 8 (6): 476–479. arXiv: 1111.3328 . Bibcode:2012NatPh...8..476P. doi:10.1038/nphys2309.
  9. C. Cohen-Tannoudji (2006) [1973]. Quantum Mechanics (2 volumes). New York: Wiley. p. 22.
  10. Griffiths, David J. (2005). Introduction to Quantum Mechanics, 2e. Upper Saddle River, New Jersey: Pearson Prentice Hall. pp. 106–109. ISBN 0131118927.
  11. Griffiths, David J. (2005). Introduction to Quantum Mechanics, 2e. Upper Saddle River, New Jersey: Pearson Prentice Hall. pp. 100–105. ISBN 0131118927.
  12. S. Mei (2013). "On the origin of preferred-basis and evolution pattern of wave function". arXiv:1311.4405.
  13. Zurek, Wojciech H. (2003). "Decoherence, einselection, and the quantum origins of the classical". Reviews of Modern Physics. 75: 715. arXiv:quant-ph/0105127.
  14. C. Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day". arXiv: quant-ph/0210152 .
  15. G. Jaeger (2017). ""Wave-Packet Reduction" and the Quantum Character of the Actualization of Potentia". Entropy. 19: 13. doi:10.3390/e19100513.
  16. W. Pauli (1958). "Die allgemeinen Prinzipien der Wellenmechanik". In S. Flügge (ed.). Handbuch der Physik (in German). V. Berlin: Springer-Verlag. p. 73.
  17. L. Landau & R. Peierls (1931). "Erweiterung des Unbestimmtheitsprinzips für die relativistische Quantentheorie". Zeitschrift für Physik (in German). 69 (1–2): 56–69. Bibcode:1931ZPhy...69...56L. doi:10.1007/BF01391513.
  18. Discussions of measurements of the second kind can be found in most treatments on the foundations of quantum mechanics, for instance, J. M. Jauch (1968). Foundations of Quantum Mechanics. Addison-Wesley. p. 165.; B. d'Espagnat (1976). Conceptual Foundations of Quantum Mechanics. W. A. Benjamin. pp. 18, 159.; and W. M. de Muynck (2002). Foundations of Quantum Mechanics: An Empiricist Approach. Kluwer Academic Publishers. section 3.2.4..