A satellite galaxy is a smaller companion galaxy that travels on bound orbits within the gravitational potential of a more massive and luminous host galaxy (also known as the primary galaxy). [1] Satellite galaxies and their constituents are bound to their host galaxy, in the same way that planets within the Solar System are gravitationally bound to the Sun. [2] While most satellite galaxies are dwarf galaxies, satellite galaxies of large galaxy clusters can be much more massive. [3] The Milky Way is orbited by more than fifty satellite galaxies, the largest of which is the Large Magellanic Cloud.
Moreover, satellite galaxies are not the only astronomical objects that are gravitationally bound to larger host galaxies (see globular clusters). For this reason, astronomers have defined galaxies as gravitationally bound collections of stars that exhibit properties that cannot be explained by a combination of baryonic matter (i.e. ordinary matter) and Newton's laws of gravity. [4] For example, measurements of the orbital speed of stars and gas within spiral galaxies result in a velocity curve that deviates significantly from the theoretical prediction. This observation has motivated various explanations such as the theory of dark matter and modifications to Newtonian dynamics. [1] Therefore, despite also being satellites of host galaxies, globular clusters should not be mistaken for satellite galaxies. Satellite galaxies are not only more extended and diffuse compared to globular clusters, but are also enshrouded in massive dark matter halos that are thought to have been endowed to them during the formation process. [5]
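To make the deviation concrete, consider the textbook Newtonian estimate for the circular speed of a star orbiting at radius $r$:

$$ v_c(r) = \sqrt{\frac{G\,M(<r)}{r}}. $$

Outside the luminous disk, where the enclosed mass $M(<r)$ should be roughly constant, the curve is predicted to fall off as $v_c \propto r^{-1/2}$; observed rotation curves instead remain roughly flat, implying $M(<r) \propto r$ and hence substantial unseen mass at large radii.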
Satellite galaxies generally lead tumultuous lives due to their chaotic interactions with both the larger host galaxy and other satellites. For example, the host galaxy is capable of disrupting the orbiting satellites via tidal and ram pressure stripping. These environmental effects can remove large amounts of cold gas from satellites (i.e. the fuel for star formation), and this can result in satellites becoming quiescent in the sense that they have ceased to form stars. [6] Moreover, satellites can also collide with their host galaxy resulting in a minor merger (i.e. a merger event between galaxies of significantly different masses). On the other hand, satellites can also merge with one another resulting in a major merger (i.e. a merger event between galaxies of comparable masses). Galaxies are mostly composed of empty space, interstellar gas and dust; therefore, galaxy mergers do not necessarily involve collisions between objects from one galaxy and objects from the other. However, these events generally result in much more massive galaxies. Consequently, astronomers seek to constrain the rate at which both minor and major mergers occur to better understand the formation of gigantic structures of gravitationally bound conglomerations of galaxies such as galactic groups and clusters. [7] [8]
Prior to the 20th century, the notion that galaxies existed beyond the Milky Way was not well established. In fact, the idea was so controversial at the time that it led to what is now heralded as the "Shapley-Curtis Great Debate", aptly named after the astronomers Harlow Shapley and Heber Doust Curtis, who debated the nature of "nebulae" and the size of the Milky Way at the National Academy of Sciences on April 26, 1920. Shapley argued that the Milky Way was the entire universe (spanning over 100,000 light-years or 30 kiloparsecs across) and that all of the observed "nebulae" (now known as galaxies) resided within this region. On the other hand, Curtis argued that the Milky Way was much smaller and that the observed nebulae were in fact galaxies similar to the Milky Way. [9] This debate was not settled until late 1923, when the astronomer Edwin Hubble measured the distance to M31 (now known as the Andromeda galaxy) using Cepheid variable stars. By measuring the period of these stars, Hubble was able to estimate their intrinsic luminosity, and upon combining this with their measured apparent magnitude he estimated a distance of 300 kpc, an order of magnitude larger than Shapley's estimate of the size of the universe. This measurement not only verified that the universe was much larger than previously expected, but also demonstrated that the observed nebulae were actually distant galaxies with a wide range of morphologies (see Hubble sequence). [9]
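Hubble's distance estimate rests on a simple chain of arithmetic: the pulsation period fixes the star's intrinsic luminosity (the period-luminosity, or Leavitt, relation), and comparing intrinsic to apparent brightness via the distance modulus yields the distance. A minimal Python sketch follows; the numerical coefficients and the example star are illustrative assumptions (a modern calibration), not Hubble's original values:

```python
import math

def cepheid_distance_kpc(period_days, apparent_mag):
    """Estimate the distance to a Cepheid from its pulsation period and
    apparent magnitude, using an illustrative period-luminosity relation
    (M_V = -2.43 * (log10 P - 1) - 4.05, a modern calibration; Hubble's
    own zero point differed substantially)."""
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 log10(d / 10 pc)
    distance_pc = 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)
    return distance_pc / 1000.0  # kiloparsecs

# Example: a hypothetical 30-day Cepheid observed at apparent magnitude 19.0
print(f"{cepheid_distance_kpc(30.0, 19.0):.0f} kpc")
```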
Despite Hubble's discovery that the universe was teeming with galaxies, a majority of the satellite galaxies of the Milky Way and the Local Group remained undetected until the advent of modern astronomical surveys such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES). [10] [11] In particular, the Milky Way is currently known to host 59 satellite galaxies (see satellite galaxies of the Milky Way); however, two of these satellites, the Large Magellanic Cloud and the Small Magellanic Cloud, have been observable in the Southern Hemisphere with the unaided eye since ancient times. Nevertheless, modern cosmological theories of galaxy formation and evolution predict a much larger number of satellite galaxies than what is observed (see missing satellites problem). [12] [13] However, more recent high-resolution simulations have demonstrated that the current number of observed satellites poses no threat to the prevalent theory of galaxy formation. [14] [15]
Spectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies, and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for predictions made by cosmological models. [14] [16] [17]
As mentioned above, satellite galaxies are generally categorized as dwarf galaxies and therefore follow a Hubble classification scheme similar to that of their hosts, with the minor addition of a lowercase "d" in front of the various standard types to designate the dwarf galaxy status. These types include dwarf irregular (dI), dwarf spheroidal (dSph), dwarf elliptical (dE) and dwarf spiral (dS). However, of all of these types, dwarf spirals are believed not to be satellites, but rather dwarf galaxies that are only found in the field. [18]
Dwarf irregular satellite galaxies are characterized by their chaotic and asymmetric appearance, large gas fractions, high star formation rate and low metallicity. [19] Three of the closest dwarf irregular satellites of the Milky Way are the Small Magellanic Cloud, Canis Major Dwarf, and the newly discovered Antlia 2.
Dwarf elliptical satellite galaxies are characterized by their oval appearance on the sky, disordered motion of constituent stars, moderate to low metallicity, low gas fractions and old stellar population. Dwarf elliptical satellite galaxies in the Local Group include NGC 147, NGC 185, and NGC 205, which are satellites of our neighboring Andromeda galaxy. [19] [20]
Dwarf spheroidal satellite galaxies are characterized by their diffuse appearance, low surface brightness, high mass-to-light ratio (i.e. dark matter dominated), low metallicity, low gas fractions and old stellar population. [1] Moreover, dwarf spheroidals make up the largest population of known satellite galaxies of the Milky Way. A few of these satellites include Hercules, Pisces II and Leo IV, which are named after the constellation in which they are found. [19]
As a result of minor mergers and environmental effects, some dwarf galaxies are classified as intermediate or transitional type satellite galaxies. For example, Phoenix and LGS 3 are classified as intermediate types that appear to be transitioning from dwarf irregulars to dwarf spheroidals. Furthermore, the Large Magellanic Cloud is considered to be in the process of transitioning from a dwarf spiral to a dwarf irregular. [19]
According to the standard model of cosmology (known as the ΛCDM model), the formation of satellite galaxies is intricately connected to the observed large-scale structure of the Universe. Specifically, the ΛCDM model is based on the premise that the observed large-scale structure is the result of a bottom-up hierarchical process that began after the recombination epoch in which electrically neutral hydrogen atoms were formed as a result of free electrons and protons binding together. As the ratio of neutral hydrogen to free protons and electrons grew, so did fluctuations in the baryonic matter density. These fluctuations rapidly grew to the point that they became comparable to dark matter density fluctuations. Moreover, the smaller mass fluctuations grew to nonlinearity, became virialized (i.e. reached gravitational equilibrium), and were then hierarchically clustered within successively larger bound systems. [21]
The gas within these bound systems condensed and rapidly cooled within cold dark matter halos that steadily increased in size by coalescing together and accumulating additional gas via a process known as accretion. The largest bound objects formed by this process are known as superclusters, such as the Virgo Supercluster, which contain smaller clusters of galaxies that are themselves surrounded by even smaller dwarf galaxies. Furthermore, in this model dwarf galaxies are considered to be the fundamental building blocks that give rise to more massive galaxies, and the satellites that are observed around these galaxies are the dwarfs that have yet to be consumed by their host. [22]
A crude yet useful way to understand how dark matter halos progressively gain mass through mergers with less massive halos is provided by the excursion set formalism, also known as the extended Press-Schechter formalism (EPS). [23] Among other things, the EPS formalism can be used to infer the fraction of mass that originated from collapsed objects of a specific mass at an earlier time by applying the statistics of Markovian random walks to the trajectories of mass elements in $(S, \delta)$-space, where $S \equiv \sigma^2(M)$ and $\delta$ represent the mass variance and overdensity, respectively.
In particular, the EPS formalism is founded on the ansatz that states "the fraction of trajectories with a first upcrossing of the barrier $\delta = \delta_c(t)$ at $S > S_1$ is equal to the mass fraction at time $t$ that is incorporated in halos with masses $M < M_1$". [24] Consequently, this ansatz ensures that each trajectory will upcross the barrier $\delta = \delta_c(t)$ given some arbitrarily large $S$, and as a result it guarantees that each mass element will ultimately become part of a halo. [24]
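The random-walk picture lends itself to a quick numerical illustration. The Python sketch below (assuming, for simplicity, a sharp k-space filter so that increments of $\delta$ are independent Gaussians in $S$) estimates the fraction of trajectories that have first upcrossed a constant barrier $\delta_c$ by mass variance $S_1$, and compares it with the corresponding Press-Schechter mass fraction $\mathrm{erfc}\!\left(\delta_c/\sqrt{2 S_1}\right)$; all numbers are illustrative:

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(42)

delta_c = 1.686        # critical overdensity for spherical collapse
S1 = 1.0               # mass variance down to which each walk is followed
n_traj, n_steps = 200_000, 400
dS = S1 / n_steps

delta = np.zeros(n_traj)
crossed = np.zeros(n_traj, dtype=bool)
for _ in range(n_steps):
    # Sharp k-space filter: each step in S adds an independent Gaussian
    # increment of variance dS to the overdensity of every trajectory.
    delta += rng.normal(0.0, np.sqrt(dS), size=n_traj)
    crossed |= delta >= delta_c

print(f"Monte Carlo first-upcrossing fraction: {crossed.mean():.4f}")
print(f"Press-Schechter mass fraction:         {erfc(delta_c / np.sqrt(2 * S1)):.4f}")
```

Because the discrete walk can miss crossings that occur between steps, the Monte Carlo fraction approaches the analytic value from below as the step size shrinks.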
Furthermore, the fraction of mass that originated from collapsed objects of a specific mass at an earlier time can be used to determine the average number of progenitors at time $t_1$ within the mass interval $(M_1, M_1 + \mathrm{d}M_1)$ that have merged to produce a halo of mass $M_2$ at time $t_2$. This is accomplished by considering a spherical region of mass $M_2$ with a corresponding mass variance $S_2$ and linear overdensity $\delta_2 \equiv \delta_c(t_2)/D(t_2)$, where $D(t)$ is the linear growth rate, normalized to unity at time $t_0$, and $\delta_c(t_2)$ is the critical overdensity at which the initial spherical region has collapsed to form a virialized object. [24] Mathematically, the progenitor mass function is expressed as:

$$ n(M_1, t_1 \mid M_2, t_2)\,\mathrm{d}M_1 = \frac{M_2}{M_1}\, f_{S_1}(S_1, \delta_1 \mid S_2, \delta_2) \left|\frac{\mathrm{d}S_1}{\mathrm{d}M_1}\right| \mathrm{d}M_1, $$

where $\delta_1 \equiv \delta_c(t_1)/D(t_1)$ and $f_{S_1}(S_1, \delta_1 \mid S_2, \delta_2)$ is the Press-Schechter multiplicity function that describes the fraction of mass associated with halos in a range $\mathrm{d}S_1$. [24]
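As a concrete illustration, the sketch below evaluates the progenitor mass function numerically, using the standard EPS conditional multiplicity function (the first-crossing distribution of Lacey and Cole) together with an assumed power-law mass variance $S(M) \propto M^{-2/3}$ chosen purely for illustration:

```python
import numpy as np

def f_cond(S1, d1, S2, d2):
    """EPS conditional multiplicity function f(S1, d1 | S2, d2): the fraction
    of the mass of a halo with (S2, d2) that was in progenitors per unit dS1
    (valid for S1 > S2 and d1 > d2)."""
    dS, dd = S1 - S2, d1 - d2
    return dd / np.sqrt(2.0 * np.pi * dS**3) * np.exp(-dd**2 / (2.0 * dS))

def S_of_M(M, S0=1.0, M0=1e12):
    """Illustrative (assumed) power-law mass variance, S = S0 * (M/M0)**(-2/3)."""
    return S0 * (M / M0) ** (-2.0 / 3.0)

# Progenitors of a 1e12 M_sun halo identified at barrier d2 = 1.0,
# traced back to an earlier epoch with barrier d1 = 1.5.
M2, d1, d2 = 1e12, 1.5, 1.0
M1 = np.logspace(10.0, 11.9, 4)
S1, S2 = S_of_M(M1), S_of_M(M2)
n_prog = (M2 / M1) * f_cond(S1, d1, S2, d2) * np.abs(np.gradient(S1, M1))

for m, n in zip(M1, n_prog):
    print(f"M1 = {m:.2e} M_sun  ->  n(M1, t1 | M2, t2) = {n:.3e} per M_sun")
```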
Various comparisons of the progenitor mass function with numerical simulations have concluded that good agreement between theory and simulation is obtained only when the difference in overdensity between the two epochs, $\delta_1 - \delta_2$, is small; otherwise the mass fraction in high-mass progenitors is significantly underestimated. This discrepancy can be attributed to crude assumptions, such as assuming a perfectly spherical collapse model and using a linear density field as opposed to a non-linear density field to characterize collapsed structures. [25] [26] Nevertheless, the utility of the EPS formalism is that it provides a computationally friendly approach for determining properties of dark matter halos.
Another utility of the EPS formalism is that it can be used to determine the rate at which a halo of initial mass $M$ transitions, through a merger, to a halo of mass between $M + \Delta M$ and $M + \Delta M + \mathrm{d}(\Delta M)$. [24] This rate is given by

$$ \frac{\mathrm{d}^2 p}{\mathrm{d}\Delta M\,\mathrm{d}t}(M \to M + \Delta M \mid t) = \frac{1}{\sqrt{2\pi}}\,\frac{1}{(S_1 - S_2)^{3/2}}\left(\frac{S_1}{S_2}\right)^{3/2}\exp\!\left[-\frac{\delta_c^2(t)\,(S_1 - S_2)}{2\,S_1\,S_2}\right]\left|\frac{\mathrm{d}S_2}{\mathrm{d}M_2}\right|\left|\frac{\mathrm{d}\delta_c}{\mathrm{d}t}\right|, $$

where $S_1 = S(M)$ and $S_2 = S(M + \Delta M)$. In general the change in mass, $\Delta M$, is the sum of a multitude of minor mergers. Nevertheless, given an infinitesimally small time interval $\mathrm{d}t$, it is reasonable to consider the change in mass to be due to a single merger event in which $M$ transitions to $M + \Delta M$. [24]
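A rough numerical evaluation of this rate, reusing the illustrative power-law variance from the sketch above and assuming an Einstein-de Sitter-like growth factor ($D \propto t^{2/3}$, so $\delta_c(t) \propto t^{-2/3}$), in arbitrary time units:

```python
import numpy as np

def S_of_M(M, S0=1.0, M0=1e12):
    # Same illustrative power-law mass variance as in the sketch above.
    return S0 * (M / M0) ** (-2.0 / 3.0)

def merger_rate(M, dM, t, t0=1.0, delta_c0=1.686):
    """EPS transition rate d^2 p / (d(dM) dt) for M -> M + dM at time t,
    assuming an Einstein-de Sitter-like growth factor D(t) = (t/t0)**(2/3),
    so that the barrier is delta_c(t) = delta_c0 * (t/t0)**(-2/3)."""
    S1, S2 = S_of_M(M), S_of_M(M + dM)
    dS = S1 - S2
    w = delta_c0 * (t / t0) ** (-2.0 / 3.0)   # barrier height delta_c(t)
    dw_dt = -2.0 / 3.0 * w / t                # d(delta_c)/dt
    eps = 1e-6 * (M + dM)                     # numerical derivative dS2/dM2
    dS2_dM2 = (S_of_M(M + dM + eps) - S_of_M(M + dM - eps)) / (2.0 * eps)
    return (1.0 / np.sqrt(2.0 * np.pi) / dS**1.5 * (S1 / S2) ** 1.5
            * np.exp(-w**2 * dS / (2.0 * S1 * S2)) * abs(dS2_dM2) * abs(dw_dt))

# Rate at which a 1e12 M_sun halo gains 1e11 M_sun through a merger at t = t0.
print(f"{merger_rate(1e12, 1e11, t=1.0):.3e} per M_sun per unit time")
```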
Throughout their lifespan, satellite galaxies orbiting in the dark matter halo experience dynamical friction and consequently descend deeper into the gravitational potential of their host as a result of orbital decay. Throughout the course of this descent, stars in the outer region of the satellite are steadily stripped away by tidal forces from the host galaxy. This process, which is an example of a minor merger, continues until the satellite is completely disrupted and consumed by the host galaxy. [27] Evidence of this destructive process can be observed in stellar debris streams around distant galaxies.
As satellites orbit their host and interact with each other, they progressively lose small amounts of kinetic energy and angular momentum due to dynamical friction. Consequently, the orbit of the satellite decays and the distance between the host and the satellite progressively decreases. This process continues until the satellite ultimately merges with the host galaxy. Furthermore, if we assume that the host is a singular isothermal sphere (SIS) and the satellite is a SIS that is sharply truncated at the radius at which it begins to accelerate towards the host (known as the Jacobi radius, $r_J$), then the time that it takes for dynamical friction to result in a minor merger can be approximated as follows:

$$ t_{\mathrm{df}} \approx \frac{1.17}{\ln\Lambda}\left(\frac{\sigma_h}{\sigma_s}\right)^{3}\frac{r_i}{\sigma_h}, $$

where $r_i$ is the initial radius at time $t = 0$, $\sigma_h$ is the velocity dispersion of the host galaxy, $\sigma_s$ is the velocity dispersion of the satellite and $\ln\Lambda$ is the Coulomb logarithm defined as $\ln\Lambda \equiv \ln\!\left[b_{\max}/\max\!\left(r_h,\, Gm/v_{\mathrm{typ}}^2\right)\right]$, with $b_{\max}$, $r_h$ and $v_{\mathrm{typ}}$ respectively representing the maximum impact parameter, the half-mass radius and the typical relative velocity. Moreover, both the half-mass radius and the typical relative velocity can be rewritten in terms of the Jacobi radius and the velocity dispersion of the host such that $r_h = r_J/2$ and $v_{\mathrm{typ}} = \sqrt{2}\,\sigma_h$. Using the Faber-Jackson relation, the velocity dispersion of satellites and their host can be estimated individually from their observed luminosity. Therefore, using the equation above, it is possible to estimate the time that it takes for a satellite galaxy to be consumed by the host galaxy. [27]
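Combining the timescale above with the Faber-Jackson relation gives a quick way to estimate merger times from luminosities alone. The sketch below adopts an illustrative Faber-Jackson anchor ($\sigma = 200\ \mathrm{km/s}$ at $L = 10^{10} L_\odot$) and the common choice $\ln\Lambda \approx \ln(M_{\mathrm{host}}/m_{\mathrm{sat}})$; both are assumptions of the sketch:

```python
import numpy as np

KM_PER_KPC = 3.086e16   # kilometres per kiloparsec
S_PER_GYR = 3.156e16    # seconds per gigayear

def sigma_faber_jackson(L_solar, L_star=1e10, sigma_star=200.0):
    """Velocity dispersion [km/s] from luminosity via Faber-Jackson (L ~ sigma^4).
    The anchor point (L_star, sigma_star) is an assumed, illustrative calibration."""
    return sigma_star * (L_solar / L_star) ** 0.25

def t_df_gyr(L_host, L_sat, r_i_kpc):
    """Approximate dynamical friction merger timescale [Gyr] for a truncated-SIS
    satellite spiralling into an SIS host from initial radius r_i (see text):
        t_df ~ (1.17 / ln(Lambda)) * (sigma_h / sigma_s)**3 * (r_i / sigma_h),
    adopting ln(Lambda) ~ ln(M_host / m_sat) = 3 * ln(sigma_h / sigma_s)."""
    sigma_h = sigma_faber_jackson(L_host)
    sigma_s = sigma_faber_jackson(L_sat)
    ln_lambda = 3.0 * np.log(sigma_h / sigma_s)
    t_s = 1.17 / ln_lambda * (sigma_h / sigma_s) ** 3 * r_i_kpc * KM_PER_KPC / sigma_h
    return t_s / S_PER_GYR

# Example: a satellite 1% as luminous as its host, starting 50 kpc out.
print(f"t_df ~ {t_df_gyr(L_host=1e10, L_sat=1e8, r_i_kpc=50.0):.1f} Gyr")
```

With these assumed numbers the estimate comes out to a few gigayears, illustrating why low-mass satellites on wide orbits can survive for a large fraction of the age of the universe.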
In 1978, pioneering work involving the measurement of the colors of merger remnants by the astronomers Beatrice Tinsley and Richard Larson gave rise to the notion that mergers enhance star formation. Their observations showed that an anomalously blue color was associated with the merger remnants. Prior to this discovery, astronomers had already classified stars (see stellar classification), and it was known that young, massive stars are bluer because they radiate more of their light at shorter wavelengths. Furthermore, it was also known that these stars live short lives due to their rapid consumption of fuel to remain in hydrostatic equilibrium. Therefore, the observation that merger remnants were associated with large populations of young, massive stars suggested that mergers induced rapid star formation (see starburst galaxy). [28] Since this discovery was made, various observations have verified that mergers do indeed induce vigorous star formation. [27] Although major mergers are far more effective at driving star formation than minor mergers, minor mergers are significantly more common, so the cumulative effect of minor mergers over cosmic time is postulated to also contribute heavily to bursts of star formation. [29]
Observations of edge-on galaxies suggest the universal presence of a thin disk, thick disk and halo component of galaxies. Despite the apparent ubiquity of these components, there is still ongoing research to determine if the thick disk and thin disk are truly distinct components. [30] Nevertheless, many theories have been proposed to explain the origin of the thick disk component, and among these theories is one that involves minor mergers. In particular, it is speculated that the preexisting thin disk component of a host galaxy is heated during a minor merger and consequently the thin disk expands to form a thicker disk component. [31]