Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes, gravitational waves, neutron stars and many other phenomena described by Albert Einstein's theory of general relativity. A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves.
A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can either be fully dynamical, stationary or static and may contain matter fields or vacuum. In the case of stationary and static solutions, numerical methods may also be used to study the stability of the equilibrium spacetimes. In the case of dynamical spacetimes, the problem may be divided into the initial value problem and the evolution, each requiring different methods.
Numerical relativity is applied to many areas, such as cosmological models, critical phenomena, perturbed black holes and neutron stars, and the coalescence of black holes and neutron stars. In any of these cases, Einstein's equations can be formulated in several ways that allow the dynamics to be evolved. While Cauchy methods have received the majority of the attention, characteristic and Regge-calculus-based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface, the initial data, and evolve these data to neighboring hypersurfaces. [1]
Like all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. In this line, much attention is paid to the gauge conditions, coordinates, and various formulations of the Einstein equations and the effect they have on the ability to produce accurate numerical solutions.
Numerical relativity research is distinct from work on classical field theories as many techniques implemented in these areas are inapplicable in relativity. Many facets are however shared with large scale problems in other computational sciences like computational fluid dynamics, electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis, scientific computation, partial differential equations, and geometry among other mathematical areas of specialization.
Albert Einstein published his theory of general relativity in 1915. [2] Like his earlier theory of special relativity, it described space and time as a unified spacetime subject to what are now known as the Einstein field equations, a set of coupled nonlinear partial differential equations (PDEs). More than 100 years after the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations.
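In standard notation the field equations set the Einstein tensor, built from the metric and its first and second derivatives, proportional to the stress–energy tensor of matter and radiation:

```latex
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

Expanded in a coordinate basis, these are ten coupled, quasi-linear, second-order PDEs for the metric components, which is what makes closed-form solutions so scarce.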
The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separated space and time. This was first published by Richard Arnowitt, Stanley Deser, and Charles W. Misner in the late 1950s in what has become known as the ADM formalism. [3] For technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations. Nevertheless, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation, because the ADM procedure reformulates the Einstein field equations as a constrained initial value problem that can be addressed using computational methodologies.
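In such a 3+1 decomposition the spacetime line element is written in terms of a spatial metric \(\gamma_{ij}\) induced on each constant-\(t\) hypersurface, together with a lapse function \(\alpha\) and a shift vector \(\beta^i\) that encode the coordinate freedom:

```latex
ds^2 = -\alpha^2\, dt^2 + \gamma_{ij}\left(dx^i + \beta^i\, dt\right)\left(dx^j + \beta^j\, dt\right)
```

The spatial metric and its extrinsic curvature are the dynamical variables evolved from slice to slice, while four of the Einstein equations become constraints that the initial data must satisfy.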
At the time ADM published their original paper, computer technology would not have supported numerical solution of their equations for any problem of substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be by S. G. Hahn and R. W. Lindquist in 1964, [4] followed soon thereafter by Larry Smarr [5] [6] and by K. R. Eppley. [7] These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using a cylindrical symmetry. [8] In this calculation Piran laid the foundation for many of the concepts used today in evolving the ADM equations, such as "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time.
The first realistic calculations of rotating collapse were carried out in the early 1980s by Richard Stark and Tsvi Piran, [9] in which the gravitational waveforms resulting from the formation of a rotating black hole were calculated for the first time. For nearly 20 years following these initial results, relatively few other results were published in numerical relativity, probably owing to the lack of computers powerful enough to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations. [10]
Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole, which is described by a static and spherically symmetric solution to the Einstein field equations. This provides an excellent test case in numerical relativity because it does have a closed-form solution so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Peter Anninos et al. in 1995. [11] In their paper they point out that
In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) excision, and (2) the "puncture" method. In addition, the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics, were introduced to the field of numerical relativity.
In the excision technique, first proposed in the late 1990s, [12] a portion of spacetime inside the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside the event horizon, by the principle of causality and the properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon, one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon. While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects can. For example, if the coordinate conditions were elliptic, coordinate changes inside could propagate out through the horizon instantly. One therefore needs hyperbolic-type coordinate conditions, with characteristic velocities less than that of light for the propagation of coordinate effects (e.g., harmonic coordinate conditions). The second is that as the black holes move, one must continually adjust the location of the excision region to move with the black hole.
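The causality argument can be illustrated with a deliberately simple toy model (this is not production numerical-relativity code; all names and parameters are illustrative). A pulse obeying a one-way wave equation travels toward an "excision boundary"; because the upwind finite-difference stencil only ever looks in the direction the signal comes from, the excised points are never referenced and can simply be left unevolved:

```python
import numpy as np

def evolve_excised(u, x, x_exc, c=-1.0, cfl=0.5, steps=200):
    """Advect u_t + c u_x = 0 (c < 0: leftward, 'ingoing') on the region x >= x_exc.

    Points with x < x_exc play the role of the excised black hole interior:
    they are never evolved, and the upwind stencil never reads them.
    """
    dx = x[1] - x[0]
    dt = cfl * dx / abs(c)
    active = x >= x_exc              # evolve only outside the excision region
    for _ in range(steps):
        un = u.copy()
        # upwind difference for c < 0 uses the neighbor to the RIGHT,
        # so the excised points on the left never enter the update
        u[:-1] = un[:-1] - c * dt / dx * (un[1:] - un[:-1])
        u[~active] = 0.0             # excised interior: simply not evolved
    return u

x = np.linspace(0.0, 10.0, 401)
u0 = np.exp(-((x - 7.0) ** 2))       # Gaussian pulse heading toward the excision zone
u = evolve_excised(u0.copy(), x, x_exc=2.0)
```

The same logic carries over to the full equations: as long as all characteristic speeds at the excision boundary point inward, no boundary data from the excised region are needed, and the exterior solution remains valid.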
The excision technique was developed over several years including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. [13] [14] [15] [16] [17] [18] The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005. [19]
In the puncture method the solution is factored into an analytical part, [20] which contains the singularity of the black hole, and a numerically constructed part, which is then singularity-free. This is a generalization of the Brill–Lindquist [21] prescription for initial data of black holes at rest and can be generalized to the Bowen–York [22] prescription for spinning and moving black hole initial data. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted," and this typically led to numerical instabilities at some stage of the simulation.
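Concretely, in the Brill–Lindquist data that the puncture approach builds on, the spatial metric is conformally flat, \(\gamma_{ij} = \psi^4 \delta_{ij}\), with a closed-form conformal factor that blows up at the puncture locations \(\mathbf{r}_i\):

```latex
\psi = 1 + \sum_i \frac{m_i}{2\,\lvert \mathbf{r} - \mathbf{r}_i \rvert}
```

In a puncture evolution, singular terms of this form are handled analytically and only the regular remainder of the solution is represented on the numerical grid.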
In 2005, a group of researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. [19] [23] [24] By choosing appropriate coordinate conditions and making crude analytic assumptions about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained for the problem of two black holes orbiting each other, along with accurate computation of the gravitational radiation (ripples in spacetime) they emit. 2005 has been called the "annus mirabilis" of numerical relativity, 100 years after the annus mirabilis papers of special relativity (1905).
The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique for extracting astrophysical results from short-lived full numerical simulations of binary black holes. It combined approximation techniques applied before the merger (post-Newtonian trajectories) and after it (perturbations of single black holes) with full numerical simulations attempting to solve Einstein's field equations. [25] All previous attempts to numerically integrate on supercomputers the Hilbert–Einstein equations describing the gravitational field around binary black holes had led to software failure before a single orbit was completed.
The Lazarus approach nevertheless gave the best insight into the binary black hole problem at the time and produced numerous relatively accurate results, such as the radiated energy and angular momentum emitted in the final merging stage, [26] [27] the linear momentum radiated by unequal-mass holes, [28] and the final mass and spin of the remnant black hole. [29] The method also computed detailed gravitational waves emitted by the merger process and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second, in the form of gravitational radiation, than an entire galaxy does in its lifetime.
Adaptive mesh refinement (AMR) as a numerical method has roots that go well beyond its first application in the field of numerical relativity. Mesh refinement first appears in the numerical relativity literature in the 1980s, through the work of Choptuik in his studies of critical collapse of scalar fields. [30] [31] The original work was in one dimension, but it was subsequently extended to two dimensions. [32] In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies, [33] [34] and to the study of Schwarzschild black holes. [35] The technique has now become a standard tool in numerical relativity and has been used to study the merger of black holes and other compact objects in addition to the propagation of gravitational radiation generated by such astronomical events. [36] [37]
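The core of AMR is a local refinement criterion: flag cells where an error estimate is large and resolve only those regions more finely. A minimal one-dimensional sketch of that idea (illustrative names, a simple gradient as the error estimate, one refinement level of factor two) might look like:

```python
import numpy as np

def flag_cells(u, dx, threshold):
    """Return a boolean mask marking points whose gradient exceeds threshold."""
    grad = np.abs(np.gradient(u, dx))   # simple local error estimate
    return grad > threshold

def refine_1d(x, flags):
    """Insert midpoints in flagged cells: one refinement level, factor two."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if flags[i] or flags[i + 1]:
            new_x.append(0.5 * (x[i] + x[i + 1]))  # midpoint of a flagged cell
        new_x.append(x[i + 1])
    return np.array(new_x)

x = np.linspace(-1.0, 1.0, 101)
u = np.tanh(50.0 * x)                   # sharp feature near x = 0, smooth elsewhere
flags = flag_cells(u, x[1] - x[0], threshold=5.0)
x_fine = refine_1d(x, flags)            # extra points cluster around the feature
```

Production codes of the Berger–Oliger type add nested subgrids with their own time steps and interpolation between refinement levels, but the flag-and-refine step above is the same in spirit.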
Since then, hundreds of research papers have been published, leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. The technique has been extended to astrophysical binary systems involving neutron stars and black holes, [38] and to multiple black holes. [39] One of the most surprising predictions is that the merger of two black holes can give the remnant hole a recoil speed of up to 4000 km/s, enough to allow it to escape from any known galaxy. [40] [41] The simulations also predict an enormous release of gravitational energy in the merger process, amounting to as much as 8% of the system's total rest mass. [42]
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
In theories of quantum gravity, the graviton is the hypothetical quantum of gravity, an elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string.
In general relativity, a naked singularity is a hypothetical gravitational singularity without an event horizon.
Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang.
A wormhole is a hypothetical structure connecting disparate points in spacetime, and is based on a special solution of the Einstein field equations.
The no-hair theorem states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three independent externally observable classical parameters: mass, angular momentum, and electric charge. Other characteristics are uniquely determined by these three parameters, and all other information about the matter that formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers after the black hole "settles down". Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name.
Jorge Pullin is an Argentine-American theoretical physicist known for his work on black hole collisions and quantum gravity. He is the Horace Hearne Chair in Theoretical Physics at Louisiana State University.
In theoretical physics, geometrodynamics is an attempt to describe spacetime and associated phenomena completely in terms of geometry. Technically, its goal is to unify the fundamental forces and reformulate general relativity as a configuration space of three-metrics, modulo three-dimensional diffeomorphisms. The origin of this idea can be found in an English mathematician William Kingdon Clifford's works. This theory was enthusiastically promoted by John Wheeler in the 1960s, and work on it continues in the 21st century.
Induced gravity is an idea in quantum gravity that spacetime curvature and its dynamics emerge as a mean field approximation of underlying microscopic degrees of freedom, similar to the fluid mechanics approximation of Bose–Einstein condensates. The concept was originally proposed by Andrei Sakharov in 1967.
Brandon Carter is an Australian theoretical physicist who explores the properties of black holes, and was the first to name and employ the anthropic principle in its contemporary form. He is a researcher at the Meudon campus of the Laboratoire Univers et Théories, part of the French CNRS.
A ring singularity or ringularity is the gravitational singularity of a rotating black hole, or a Kerr black hole, that is shaped like a ring.
In metric theories of gravitation, particularly general relativity, a static spherically symmetric perfect fluid solution is a spacetime equipped with suitable tensor fields which models a static round ball of a fluid with isotropic pressure.
Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources.
A nonsingular black hole model is a mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon.
Alessandra Buonanno is an Italian-American theoretical physicist and director at the Max Planck Institute for Gravitational Physics in Potsdam. She is the head of the "Astrophysical and Cosmological Relativity" department. She holds a research professorship at the University of Maryland, College Park, and honorary professorships at the Humboldt University in Berlin, and the University of Potsdam. She is a leading member of the LIGO Scientific Collaboration, which observed gravitational waves from a binary black-hole merger in 2015.
A binary black hole (BBH), or black hole binary, is a system consisting of two black holes in close orbit around each other. Like black holes themselves, binary black holes are often divided into binary stellar black holes, formed either as remnants of high-mass binary star systems or by dynamic processes and mutual capture; and binary supermassive black holes, believed to be a result of galactic mergers.
The first direct observation of gravitational waves was made on 14 September 2015 and was announced by the LIGO and Virgo collaborations on 11 February 2016. Previously, gravitational waves had been inferred only indirectly, via their effect on the timing of pulsars in binary star systems. The waveform, detected by both LIGO observatories, matched the predictions of general relativity for a gravitational wave emanating from the inward spiral and merger of a pair of black holes of around 36 and 29 solar masses and the subsequent "ringdown" of the single resulting black hole. The signal was named GW150914. It was also the first observation of a binary black hole merger, demonstrating both the existence of binary stellar-mass black hole systems and the fact that such mergers could occur within the current age of the universe.
Carlos O. Lousto is a Distinguished Professor in the School of Mathematical Sciences in Rochester Institute of Technology, known for his work on black hole collisions.
Manuela Campanelli is a distinguished professor of astrophysics at the Rochester Institute of Technology. She also holds the John Vouros endowed professorship at RIT and is the director of its Center for Computational Relativity and Gravitation. Her work focuses on the astrophysics of merging black holes and neutron stars, which are powerful sources of gravitational waves, electromagnetic radiation and relativistic jets. This research is central to the fields of relativistic astrophysics and gravitational-wave astronomy.