| Andrew MacGregor Childs | |
| --- | --- |
| Nationality | American |
| Alma mater | California Institute of Technology; Massachusetts Institute of Technology |
| Known for | Quantum computing, quantum algorithms, quantum walk |
| Scientific career | |
| Fields | Computer science, physics |
| Institutions | University of Maryland; University of Waterloo |
| Doctoral advisor | Edward Farhi |
| Website | www |
Andrew MacGregor Childs is an American computer scientist and physicist known for his work on quantum computing. He is currently a professor in the Department of Computer Science and the Institute for Advanced Computer Studies at the University of Maryland. He also co-directs the Joint Center for Quantum Information and Computer Science, a partnership between the University of Maryland and the National Institute of Standards and Technology.[1]
Andrew Childs received a doctorate in physics from MIT in 2004, advised by Edward Farhi.[2] His thesis was titled *Quantum Information Processing in Continuous Time*.[3] After completing his Ph.D., Childs was a DuBridge Postdoctoral Scholar at the Institute for Quantum Information at the California Institute of Technology from 2004 to 2007.[4] From 2007 to 2014, he was a faculty member in the Department of Combinatorics and Optimization and the Institute for Quantum Computing at the University of Waterloo. Childs joined the University of Maryland in 2014. He is also a senior fellow of the Canadian Institute for Advanced Research.[5]
Childs is known for his work on quantum computing, especially on the development of quantum algorithms.[6][7][8] He helped develop the concept of the quantum walk,[9][10][11][12] which led to an example of exponential quantum speedup and to algorithms for spatial search,[13] formula evaluation, and universal computation.[14][15] He also developed quantum algorithms for algebraic problems and for simulating quantum systems.
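The continuous-time quantum walk central to this line of work evolves a walker under the unitary U(t) = exp(−iAt) generated by a graph's adjacency matrix A. The sketch below is a minimal Python/numpy/scipy illustration, not code from any of the cited papers; the cycle graph, evolution time, and spread measure are arbitrary choices made for the example.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a cycle graph on n vertices.
n = 21
A = np.zeros((n, n))
for j in range(n):
    A[j, (j + 1) % n] = A[(j + 1) % n, j] = 1

# Continuous-time quantum walk: start localized at vertex 0 and
# evolve under U(t) = exp(-i A t).
t = 5.0
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
quantum_probs = np.abs(expm(-1j * A * t) @ psi0) ** 2

# Classical continuous-time random walk on the same graph, generated
# by the graph Laplacian L = D - A (D = 2I on a cycle).
L = 2 * np.eye(n) - A
classical_probs = expm(-L * t) @ np.abs(psi0)

def spread(p):
    # Participation ratio: effective number of vertices occupied.
    return 1.0 / np.sum(p ** 2)

print(f"spread after t={t}: quantum {spread(quantum_probs):.1f}, "
      f"classical {spread(classical_probs):.1f}")
```

The quantum distribution widens linearly in t while the classical one widens only as the square root of t; this ballistic-versus-diffusive contrast is the qualitative source of quantum-walk speedups.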
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is usually reserved for algorithms that seem inherently quantum, or that use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
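As a concrete illustration, the sketch below simulates Deutsch's algorithm, one of the simplest quantum algorithms, directly at the level of state vectors and gate matrices. It is a minimal Python/numpy model, not tied to any quantum programming framework: one oracle query in superposition decides whether f: {0,1} → {0,1} is constant or balanced, which classically requires two evaluations.

```python
import numpy as np

# Gates as matrices; two-qubit states as vectors in the computational basis.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, written out as a 4x4 permutation.
    U = np.zeros((4, 4), dtype=complex)
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Prepare |0>|1>, apply H to both qubits, query the oracle once,
    # apply H to the first qubit, then measure the first qubit.
    state = np.kron([1, 0], [0, 1]).astype(complex)
    state = np.kron(H, H) @ state
    state = oracle(f) @ state
    state = np.kron(H, I) @ state
    p_one = np.sum(np.abs(state[2:]) ** 2)   # first qubit reads 1
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```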
Seth Lloyd is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology.
In quantum computing, the Gottesman–Knill theorem is a theoretical result by Daniel Gottesman and Emanuel Knill which states that stabilizer circuits, circuits that consist only of gates from the normalizer of the qubit Pauli group, also called the Clifford group, can be perfectly simulated in polynomial time on a probabilistic classical computer. The Clifford group can be generated solely by the CNOT, Hadamard, and phase (S) gates; therefore stabilizer circuits can be constructed using only these gates.
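The defining property of the Clifford group, that conjugation by its elements maps Pauli operators to Pauli operators up to sign, can be checked numerically. The sketch below is a brute-force Python/numpy verification over all Pauli strings, included purely for illustration.

```python
import numpy as np
from itertools import product

# Pauli matrices and the three Clifford-group generators named above.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_matrix(label):
    # Tensor product of single-qubit Paulis, e.g. ("X", "Z") -> X (x) Z.
    M = PAULIS[label[0]]
    for name in label[1:]:
        M = np.kron(M, PAULIS[name])
    return M

def is_pauli_up_to_sign(Q, n_qubits):
    return any(np.allclose(Q, sign * pauli_matrix(label))
               for label in product("IXYZ", repeat=n_qubits)
               for sign in (1, -1))

# Conjugating any Pauli string by a Clifford gate yields +/- a Pauli string.
for name, gate, n in [("H", H, 1), ("S", S, 1), ("CNOT", CNOT, 2)]:
    ok = all(is_pauli_up_to_sign(gate @ pauli_matrix(l) @ gate.conj().T, n)
             for l in product("IXYZ", repeat=n))
    print(f"{name} maps Pauli strings to Pauli strings: {ok}")
```

This closure property is what lets a classical simulator track only the conjugated stabilizer generators, in time polynomial in the number of qubits, instead of an exponentially large state vector.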
Quantum annealing (QA) is an optimization process for finding the global minimum of a given objective function over a given set of candidate solutions by means of quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete with many local minima, such as finding the ground state of a spin glass or solving the traveling salesman problem. The term "quantum annealing" was first proposed in 1988 by B. Apolloni, N. Cesa-Bianchi and D. De Falco as a quantum-inspired classical algorithm. It was formulated in its present form by T. Kadowaki and H. Nishimori in 1998, though an imaginary-time variant without quantum coherence had been discussed by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in 1994.
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations and is closely related to quantum annealing.
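Both ideas can be pictured with the standard interpolating Hamiltonian H(s) = (1 − s) H_B + s H_P, where H_B is a transverse-field driver whose ground state is easy to prepare and H_P is diagonal and encodes the cost function. The sketch below is a minimal Python/numpy illustration on three qubits; the cost function is a hypothetical toy objective, and exact diagonalization stands in for physical annealing.

```python
import numpy as np
from itertools import product

# Hypothetical toy objective over 3-bit strings (unique minimum at 101).
n = 3
def cost(bits):
    x = np.array(bits)
    return (x[0] - 1) ** 2 + x[1] ** 2 + (x[2] - 1) ** 2 + x[0] * x[1]

# Problem Hamiltonian H_P: diagonal in the computational basis.
H_P = np.diag([cost(b) for b in product((0, 1), repeat=n)]).astype(float)

# Driver H_B = -sum_i X_i; its ground state is the uniform superposition.
X = np.array([[0, 1], [1, 0]], dtype=float)
H_B = sum(-np.kron(np.kron(np.eye(2 ** i), X), np.eye(2 ** (n - i - 1)))
          for i in range(n))

# Sweep s from 0 to 1 and watch the instantaneous ground state and gap.
for s in np.linspace(0, 1, 5):
    H = (1 - s) * H_B + s * H_P
    vals, vecs = np.linalg.eigh(H)
    gap = vals[1] - vals[0]
    best = int(np.argmax(np.abs(vecs[:, 0])))
    print(f"s={s:.2f}  gap={gap:.3f}  dominant basis state={best:0{n}b}")
```

The adiabatic theorem guarantees that a sweep slow compared to the inverse square of the minimum gap keeps the system near the instantaneous ground state, so the state at s = 1 reveals the minimizer of the cost function.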
Daniel Amihud Lidar is the holder of the Viterbi Professorship of Engineering at the University of Southern California, where he is a professor of electrical engineering, chemistry, physics and astronomy. He is the director and co-founder of the USC Center for Quantum Information Science & Technology (CQIST) as well as scientific director of the USC-Lockheed Martin Quantum Computing Center, and is notable for his research on the control of quantum systems and quantum information processing.
Richard Erwin Cleve is a Canadian professor of computer science at the David R. Cheriton School of Computer Science at the University of Waterloo, where he holds the Institute for Quantum Computing Chair in quantum computing, and is an associate member of the Perimeter Institute for Theoretical Physics.
Dynamical decoupling (DD) is an open-loop quantum control technique employed in quantum computing to suppress decoherence by taking advantage of rapid, time-dependent control modulation. In its simplest form, DD is implemented by periodic sequences of instantaneous control pulses, whose net effect is to approximately average the unwanted system-environment coupling to zero. Different schemes exist for designing DD protocols that use realistic bounded-strength control pulses, as well as for achieving high-order error suppression, and for making DD compatible with quantum gates. In spin systems in particular, commonly used protocols for dynamical decoupling include the Carr–Purcell and the Carr–Purcell–Meiboom–Gill (CPMG) schemes. They are based on the Hahn spin echo technique of applying periodic pulses to enable refocusing and hence extend the coherence times of qubits.
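The refocusing mechanism is easy to demonstrate for quasi-static dephasing noise, where each experimental run sees a random but constant detuning. The sketch below is a minimal Python/numpy model; the Gaussian detuning spread and evolution time are arbitrary choices. It compares the averaged coherence after free evolution with the coherence after a Hahn echo, whose π pulse at the midpoint inverts the phase accumulated so far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quasi-static dephasing: each run sees a random, constant detuning.
n_runs = 5000
sigma = 1.0          # spread of the detuning (rad per unit time)
t = 3.0              # total evolution time
deltas = rng.normal(0.0, sigma, n_runs)

# Free evolution: the averaged coherence <exp(i delta t)> decays because
# every run accumulates a different phase delta * t.
free = np.mean(np.exp(1j * deltas * t))

# Hahn echo: the pi pulse at t/2 inverts the accumulated phase, so the
# two halves cancel exactly for any static detuning.
echo = np.mean(np.exp(1j * deltas * (t / 2)) * np.exp(-1j * deltas * (t / 2)))

print(f"coherence without echo: {abs(free):.3f}")  # ~ exp(-(sigma*t)^2/2)
print(f"coherence with echo:    {abs(echo):.3f}")  # ~ 1.000
```

CPMG-style sequences with many pulses extend the same cancellation to detunings that drift slowly during the evolution.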
Quantum machine learning is the integration of quantum algorithms within machine learning programs.
Edward Henry Farhi is a physicist working on quantum computation as a principal scientist at Google. In 2018 he retired from his position as the Cecil and Ida Green Professor of Physics at the Massachusetts Institute of Technology. He was the director of the Center for Theoretical Physics at MIT from 2004 until 2016. He made contributions to particle physics, general relativity and astroparticle physics before turning to his current interest, quantum computation.
Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Usually, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems that are not practically feasible on classical computers to be solved, or may offer a considerable speedup over the best known classical algorithm.
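One widely studied example is the quantum approximate optimization algorithm (QAOA) applied to MaxCut. The sketch below is a minimal depth-1 statevector simulation in Python/numpy/scipy; the four-vertex ring instance and the coarse grid search over the two variational angles are arbitrary illustrative choices, not a practical optimizer.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# Hypothetical MaxCut instance: a 4-vertex ring.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def cut_value(z):
    return sum(z[i] != z[j] for i, j in edges)

# Cost operator C is diagonal: C|z> = cut(z)|z>.
costs = np.array([cut_value(z) for z in product((0, 1), repeat=n)], float)

# Mixer B = sum_i X_i.
X = np.array([[0, 1], [1, 0]], dtype=float)
B = sum(np.kron(np.kron(np.eye(2 ** i), X), np.eye(2 ** (n - i - 1)))
        for i in range(n))

plus = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+...+>

def expected_cut(gamma, beta):
    # Depth-1 QAOA state: exp(-i beta B) exp(-i gamma C) |+...+>.
    psi = expm(-1j * beta * B) @ (np.exp(-1j * gamma * costs) * plus)
    return float(np.abs(psi) ** 2 @ costs)

# Coarse grid search over the two variational angles.
grid = np.linspace(0, np.pi, 25)
gamma, beta = max(((g, b) for g in grid for b in grid),
                  key=lambda gb: expected_cut(*gb))
print(f"best angles: gamma={gamma:.2f}, beta={beta:.2f}")
print(f"expected cut: {expected_cut(gamma, beta):.3f} "
      f"(optimum {costs.max():.0f})")
```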
Quantum image processing (QIMP) is the use of quantum computing or quantum information processing to create and work with quantum images.
In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum computer can solve a problem that no classical computer can solve in any feasible amount of time, irrespective of the usefulness of the problem. The term was coined by John Preskill in 2012, but the concept dates to Yuri Manin's 1980 and Richard Feynman's 1981 proposals of quantum computing.
Aram Wettroth Harrow is a professor of physics in the Massachusetts Institute of Technology's Center for Theoretical Physics.
Continuous-variable (CV) quantum information is the area of quantum information science that makes use of physical observables, like the strength of an electromagnetic field, whose numerical values belong to continuous intervals. One primary application is quantum computing. In a sense, continuous-variable quantum computation is "analog", while quantum computation using qubits is "digital." In more technical terms, the former makes use of Hilbert spaces that are infinite-dimensional, while the Hilbert spaces for systems comprising collections of qubits are finite-dimensional. One motivation for studying continuous-variable quantum computation is to understand what resources are necessary to make quantum computers more powerful than classical ones.
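The infinite-dimensional structure can be modeled numerically by truncating the Fock basis of a single mode. The sketch below is a Python/numpy illustration (the cutoff N = 30 is arbitrary): it builds the annihilation operator and the two quadratures and shows that the canonical commutator [x, p] = iI holds exactly except at the truncation edge, the basic artifact of any finite model of a CV system.

```python
import numpy as np

# Truncated Fock-space model of one continuous-variable mode.
N = 30                                                     # basis cutoff
a = np.diag(np.sqrt(np.arange(1, N, dtype=float)), k=1)    # annihilation op
x = (a + a.conj().T) / np.sqrt(2)                          # position quadrature
p = (a - a.conj().T) / (1j * np.sqrt(2))                   # momentum quadrature

comm = x @ p - p @ x

# [x, p] = i I holds everywhere except the last Fock level, where the
# truncation of the infinite-dimensional space shows up.
print(np.allclose(comm[:N - 1, :N - 1], 1j * np.eye(N - 1)))   # True
print(comm[N - 1, N - 1])                                      # i * (1 - N)
```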
In quantum computing, a qubit is a unit of information analogous to a bit in classical computing, but it is governed by quantum mechanical properties such as superposition and entanglement, which allow qubits to outperform classical bits for some tasks. Qubits are used in quantum circuits and quantum algorithms composed of quantum logic gates to solve computational problems, where they are used for input/output and intermediate computations.
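In the simplest mathematical model, a qubit is a normalized vector in C², and measurement follows the Born rule. The sketch below is a minimal Python/numpy illustration: it prepares the superposition H|0⟩ = (|0⟩ + |1⟩)/√2 and samples measurement outcomes, recovering the predicted 50/50 statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit state is a normalized vector in C^2.
zero = np.array([1, 0], dtype=complex)                 # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ zero                                         # (|0> + |1>)/sqrt(2)

# Born rule: measuring in the computational basis gives outcome k
# with probability |psi_k|^2.
probs = np.abs(psi) ** 2
samples = rng.choice(2, size=10_000, p=probs)
print("Born-rule probabilities:", probs.round(3))      # [0.5 0.5]
print("empirical frequencies:  ", np.bincount(samples) / samples.size)
```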
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations.
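The protocol's core loop is short enough to simulate directly. The sketch below is a single-qubit Python/numpy model: the uniform depolarizing error of strength p = 0.01 after every gate and the shot count are simplifying assumptions, and real implementations fit a decay curve A·α^m + B to extract the average error rate. It builds the 24-element Clifford group by closure over H and S, applies random sequences followed by the inverting Clifford, and prints the decaying survival probability.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Build the 24-element single-qubit Clifford group by closure over H, S.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def canon(U):
    # Remove the global phase so that equal group elements compare equal.
    pivot = U.flat[np.argmax(np.abs(U) > 1e-9)]
    return np.round(U * (abs(pivot) / pivot), 9) + 0.0

group = {canon(np.eye(2, dtype=complex)).tobytes(): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    fresh = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            key = canon(V).tobytes()
            if key not in group:
                group[key] = V
                fresh.append(V)
    frontier = fresh
cliffords = list(group.values())
assert len(cliffords) == 24

# --- Randomized benchmarking with depolarizing noise of strength p per gate.
p = 0.01

def apply_noisy(rho, U):
    rho = U @ rho @ U.conj().T
    return (1 - p) * rho + p * np.eye(2) / 2           # depolarizing channel

def average_survival(m, shots=300):
    total = 0.0
    for _ in range(shots):
        rho = np.diag([1.0, 0.0]).astype(complex)      # start in |0><0|
        seq = [cliffords[i] for i in rng.integers(len(cliffords), size=m)]
        prod = np.eye(2, dtype=complex)
        for U in seq:
            rho = apply_noisy(rho, U)
            prod = U @ prod
        rho = apply_noisy(rho, prod.conj().T)          # inverting Clifford
        total += np.real(rho[0, 0])                    # survival of |0>
    return total / shots

for m in (1, 5, 20, 50):
    print(f"sequence length {m:3d}: average survival {average_survival(m):.3f}")
```

With purely depolarizing noise every sequence gives the same survival, 0.5 + 0.5(1 − p)^(m+1); for realistic gate-dependent noise, averaging over random sequences is what isolates a single exponential decay whose rate gives the average gate error.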
The Eastin–Knill theorem is a no-go theorem that states: "No quantum error correcting code can have a continuous symmetry which acts transversally on physical qubits." In other words, no quantum error correcting code can transversally implement a universal gate set, where a transversal logical gate is one that can be implemented on a logical qubit by the independent action of separate physical gates on corresponding physical qubits.
This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.