Luminosity function (astronomy)

In astronomy, a luminosity function gives the number of stars or galaxies per luminosity interval. [1] Luminosity functions are used to study the properties of large groups or classes of objects, such as the stars in clusters or the galaxies in the Local Group.

Note that the term "function" is slightly misleading, and the luminosity function might better be described as a luminosity distribution. Given a luminosity as input, the luminosity function essentially returns the abundance of objects with that luminosity (specifically, number density per luminosity interval).
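As a concrete toy illustration of "number density per luminosity interval", the sketch below bins a small, made-up catalogue of luminosities and normalizes the counts by survey volume and bin width. The catalogue, volume, and bin count are all hypothetical values chosen for illustration, not data:

```python
# Toy luminosity-function estimate from a small catalogue: bin objects by
# luminosity, then divide counts by (survey volume x bin width) to obtain a
# number density per luminosity interval. All values here are hypothetical.
luminosities = [9.0e8, 1.2e9, 3.4e9, 4.4e9, 5.0e9, 8.1e9, 1.1e10, 2.2e10]  # L_sun
volume = 1.0e6  # assumed survey volume in Mpc^3

def luminosity_function(lums, volume, n_bins=4):
    """Return phi_k, the number density per unit luminosity in each bin."""
    lo, hi = min(lums), max(lums)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for L in lums:
        # clamp the maximum luminosity into the last bin
        k = min(int((L - lo) / width), n_bins - 1)
        counts[k] += 1
    return [c / (volume * width) for c in counts]

phi = luminosity_function(luminosities, volume)
```

A real measurement must also correct for selection effects (for example with a 1/V_max weighting), which this sketch deliberately omits.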

Main sequence luminosity function

The main sequence luminosity function maps the distribution of main sequence stars according to their luminosity. It is used to test models of star formation, stellar death rates, and stellar evolution against observations. Main sequence luminosity functions vary depending on the host galaxy and on the selection criteria for the stars, for example between the Solar neighbourhood and the Small Magellanic Cloud. [2]

White dwarf luminosity function

The white dwarf luminosity function (WDLF) gives the number of white dwarf stars with a given luminosity. As this is determined by the rates at which these stars form and cool, it is of interest for the information it gives about the physics of white dwarf cooling and the age and history of the Galaxy. [3] [4]

Schechter luminosity function

The Schechter luminosity function [5] provides an approximation of the abundance of galaxies in a luminosity interval $[L, L + \mathrm{d}L]$. The luminosity function has units of number density per unit luminosity and is given by a power law with an exponential cut-off at high luminosity:

$$\phi(L)\,\mathrm{d}L = \phi^* \left(\frac{L}{L^*}\right)^{\alpha} e^{-L/L^*} \,\frac{\mathrm{d}L}{L^*},$$

where $L^*$ is a characteristic galaxy luminosity controlling the cut-off, $\alpha$ sets the power-law slope at the faint end, and the normalization $\phi^*$ has units of number density.

Equivalently, this equation can be expressed in terms of log-quantities [6] as

$$\phi(\log L)\,\mathrm{d}\log L = \ln(10)\,\phi^* \left(\frac{L}{L^*}\right)^{\alpha+1} e^{-L/L^*} \,\mathrm{d}\log L.$$

The galaxy luminosity function may have different parameters for different galaxy populations and environments; it is not a universal function. One set of measured parameters for field galaxies is given by Longair. [7]

It is often more convenient to rewrite the Schechter function in terms of absolute magnitudes, rather than luminosities. In this case, the Schechter function becomes:

$$\phi(M)\,\mathrm{d}M = 0.4\ln(10)\,\phi^* \left[10^{0.4(M^* - M)}\right]^{\alpha+1} \exp\!\left[-10^{0.4(M^* - M)}\right] \mathrm{d}M,$$

where $M^*$ is the absolute magnitude corresponding to $L^*$.

Note that because the magnitude system is logarithmic, the power law has logarithmic slope $\alpha + 1$. This is why a Schechter function with $\alpha = -1$ is said to be flat.
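As a sanity check on the magnitude form, the two parametrizations can be compared numerically: conservation of counts requires $\phi(M)\,\mathrm{d}M = \phi(L)\,\mathrm{d}L$, i.e. $\phi(M) = 0.4\ln(10)\,L\,\phi(L)$. A minimal Python sketch, with hypothetical parameter values chosen only for illustration:

```python
import math

# Hypothetical Schechter parameters (illustration only, not measurements).
PHI_STAR = 1.0e-2   # normalization (number density)
L_STAR = 1.0e10     # characteristic luminosity
ALPHA = -1.25       # faint-end slope
M_STAR = 0.0        # magnitude zero-point placed at L_STAR

def schechter_L(L):
    """phi(L): number density per unit luminosity."""
    x = L / L_STAR
    return (PHI_STAR / L_STAR) * x**ALPHA * math.exp(-x)

def schechter_M(M):
    """phi(M): number density per unit magnitude."""
    x = 10.0 ** (0.4 * (M_STAR - M))
    return 0.4 * math.log(10.0) * PHI_STAR * x**(ALPHA + 1) * math.exp(-x)

# Same object, expressed in both variables; dM = -2.5 dlog10(L), so the two
# densities must satisfy phi(M) = 0.4 ln(10) * L * phi(L).
L = 3.0e9
M = M_STAR - 2.5 * math.log10(L / L_STAR)
lhs = schechter_M(M)
rhs = 0.4 * math.log(10.0) * L * schechter_L(L)
```

The two values agree to floating-point precision, confirming the change of variables.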

Integrals of the Schechter function can be expressed in terms of the upper incomplete gamma function:

$$\int_{L_0}^{\infty} \phi(L)\,\mathrm{d}L = \phi^*\,\Gamma\!\left(\alpha+1,\;\frac{L_0}{L^*}\right).$$
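The incomplete-gamma form of these integrals follows from the substitution $t = L/L^*$. The sketch below (with hypothetical parameter values, and the upper limit truncated where the exponential cut-off makes the integrand negligible) checks the identity numerically on the same grid:

```python
import math

# Hypothetical parameters, for illustration only.
PHI_STAR, L_STAR, ALPHA = 1.0e-2, 1.0e10, -0.5

def schechter(L):
    """phi(L): number density per unit luminosity."""
    t = L / L_STAR
    return (PHI_STAR / L_STAR) * t**ALPHA * math.exp(-t)

def trapz(f, a, b, n=20000):
    """Plain trapezoidal rule on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

L0 = 0.5 * L_STAR
# Direct integral of phi(L) above L0, truncated at 50 L*.
lhs = trapz(schechter, L0, 50.0 * L_STAR)
# phi* x Gamma(alpha+1, L0/L*): the upper incomplete gamma function,
# evaluated numerically over the same truncated range in t = L/L*.
rhs = PHI_STAR * trapz(lambda t: t**ALPHA * math.exp(-t), L0 / L_STAR, 50.0)
```

For α ≤ −1 the integrand diverges as L0 → 0, which is why the faint-end slope controls whether the total number of galaxies converges.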

Historically, the Schechter luminosity function was inspired by the Press–Schechter model. [8] However, the connection between the two is not straightforward. If one assumes that every dark matter halo hosts one galaxy, then the Press–Schechter model yields a slope of roughly $\alpha = -2$ for galaxies, instead of the value given above, which is closer to $-1$. The reason for this failure is that large halos tend to host one large central galaxy and many smaller satellites, while small halos may not host any galaxies with stars at all. See, e.g., the halo occupation distribution for a more detailed description of the halo–galaxy connection.

References

  1. Stahler, S.; Palla, F. (2004). The Formation of Stars. Wiley-VCH. doi:10.1002/9783527618675. ISBN 978-3-527-61867-5.
  2. Butcher, H. (1977). "A main-sequence luminosity function for the Large Magellanic Cloud". The Astrophysical Journal. 216: 372. Bibcode:1977ApJ...216..372B. doi:10.1086/155477.
  3. Claver, C. F.; Winget, D. E.; Nather, R. E.; MacQueen, P. J. (1998). "The Texas Deep Sky Survey: Spectroscopy of Cool Degenerate Stars". American Astronomical Society Meeting Abstracts. 193. Bibcode:1998AAS...193.3702C.
  4. Fontaine, G.; Brassard, P.; Bergeron, P. (2001). "The Potential of White Dwarf Cosmochronology". Publications of the Astronomical Society of the Pacific. 113 (782): 409. Bibcode:2001PASP..113..409F. doi:10.1086/319535. S2CID 54970082.
  5. Schechter, P. (1976). "An analytic expression for the luminosity function for galaxies". The Astrophysical Journal. 203: 297–306. Bibcode:1976ApJ...203..297S. doi:10.1086/154079. ISSN 0004-637X.
  6. Sobral, David; Smail, Ian; Best, Philip N.; Geach, James E.; Matsuda, Yuichi; Stott, John P.; Cirasuolo, Michele; Kurk, Jaron (2013). "A large Hα survey at z = 2.23, 1.47, 0.84 and 0.40: the 11 Gyr evolution of star-forming galaxies from HiZELS". Monthly Notices of the Royal Astronomical Society. 428 (2): 1128–1146. arXiv:1202.3436. Bibcode:2013MNRAS.428.1128S. doi:10.1093/mnras/sts096. ISSN 0035-8711.
  7. Longair, Malcolm (1998). Galaxy Formation. Springer-Verlag. ISBN 978-3-540-63785-1.
  8. Barkana, Rennan (2018). The Encyclopedia of Cosmology, Volume 1: Galaxy Formation and Evolution. World Scientific. doi:10.1142/9496. ISBN 9789814656221. S2CID 259542973.