Instrumental magnitude

Instrumental magnitude refers to an uncalibrated apparent magnitude. Like its counterpart, it describes the brightness of an astronomical object as seen by an observer on Earth, but unlike its counterpart, it is only useful in relative comparisons to other astronomical objects in the same image (assuming the photometric calibration does not vary spatially across the image; in the case of images from the Palomar Transient Factory, the absolute photometric calibration involves a zero point that varies over the image by up to 0.16 magnitudes to make a required illumination correction [1]). Instrumental magnitude is defined in various ways, so when working with instrumental magnitudes, it is important to know how they are defined. The most basic definition of instrumental magnitude, $m$, is given by

$m = -2.5 \log_{10}(f)$

where $f$ is the intensity of the source object in known physical units. For example, in the paper by Mighell, [2] it was assumed that the data are in units of electron number (generated within pixels of a charge-coupled device). The physical units of the source intensity are thus part of the definition required for any instrumental magnitudes that are employed. The factor of 2.5 in the above formula originates from the established fact that the human eye can only clearly distinguish the brightness of two objects if one is at least approximately 2.5 times brighter than the other. [3] The instrumental magnitude is defined such that two objects with a brightness ratio of exactly 100 differ by precisely 5 magnitudes, and this is based on Pogson's system of defining each successive magnitude as being fainter by $\sqrt[5]{100}$. We can now relate this to the base-10 logarithmic function and the leading coefficient in the above formula:

$2.5 \log_{10}(100) = 5, \qquad \sqrt[5]{100} = 10^{2/5} \approx 2.512$
The approximate value of 2.5 is used as a convenience; its negative sign ensures that brighter objects will have smaller, and possibly negative, values; and tabulated values of base-10 logarithms were available more than three centuries before the advent of computers and calculators.
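
As a concrete illustration of the basic definition above, here is a minimal Python sketch (the function name and the electron counts are purely illustrative, following Mighell's convention of intensities in CCD electron numbers):

    import math

    def instrumental_magnitude(flux):
        """Basic instrumental magnitude, m = -2.5 * log10(f).

        `flux` is the source intensity in known physical units,
        e.g. the total number of electrons within a CCD aperture.
        """
        return -2.5 * math.log10(flux)

    # Two sources with a brightness ratio of exactly 100 differ by exactly 5 magnitudes.
    faint = instrumental_magnitude(1.0e4)     # 10,000 electrons    -> m = -10.0
    bright = instrumental_magnitude(1.0e6)    # 1,000,000 electrons -> m = -15.0
    print(faint - bright)                     # 5.0
    # One magnitude step corresponds to a brightness ratio of 100**(1/5).
    print(100 ** (1 / 5))                     # 2.5118864...

Note that the brighter source has the smaller (here, more negative) magnitude, consistent with the sign convention described above.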

Related Research Articles

Apparent magnitude – brightness of a celestial object observed from the Earth

Apparent magnitude is a measure of the brightness of a star or other astronomical object observed from Earth. An object's apparent magnitude depends on its intrinsic luminosity, its distance from Earth, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.

Absolute magnitude is a measure of the luminosity of a celestial object, on an inverse logarithmic astronomical magnitude scale. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs, without extinction of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared on a magnitude scale.
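
The definition above can be written as M = m − 5 log10(d / 10 pc) when extinction is ignored; the following minimal Python sketch (function name and example values are illustrative only) applies it:

    import math

    def absolute_magnitude(apparent_mag, distance_pc):
        """Absolute magnitude M = m - 5*log10(d / 10 pc), ignoring extinction."""
        return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

    # An object of apparent magnitude 10 observed from 1000 parsecs would appear
    # 10 magnitudes brighter if moved to the 10 parsec reference distance.
    print(absolute_magnitude(10.0, 1000.0))    # 0.0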

Parsec – Unit of length used in astronomy

The parsec is a unit of length used to measure the large distances to astronomical objects outside the Solar System, approximately equal to 3.26 light-years or 206,000 astronomical units, i.e. 30.9 trillion kilometres. The parsec is obtained by the use of parallax and trigonometry, and is defined as the distance at which one astronomical unit subtends an angle of one arcsecond. This corresponds to 648000/π astronomical units, i.e. 1 pc = 1/tan(1″) au. The nearest star, Proxima Centauri, is about 1.3 parsecs from the Sun. Most of the stars visible to the unaided eye in the night sky are within 500 parsecs of the Sun.
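
As a quick check of the figures quoted above, a small illustrative Python computation (not part of the original article text):

    import math

    au_per_pc = 648000 / math.pi           # 1 pc = 648000/pi au by definition
    print(au_per_pc)                       # ~206264.8, i.e. roughly 206,000 au
    km_per_au = 1.495978707e8              # one astronomical unit in kilometres
    print(au_per_pc * km_per_au / 1e12)    # ~30.9 (trillion kilometres)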

Luminosity

Luminosity is an absolute measure of radiated electromagnetic power (light), the radiant power emitted by a light-emitting object.

Gamma correction, or often simply gamma, is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the power-law expression $V_{\text{out}} = A\,V_{\text{in}}^{\gamma}$, where the non-negative real input value $V_{\text{in}}$ is raised to the power $\gamma$ and multiplied by the constant $A$ to get the output value.
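
For illustration, a small Python sketch of this power-law encoding (the exponent 1/2.2 and the scaling constant are example values only, not a definitive implementation):

    def gamma_encode(v_in, gamma=1 / 2.2, a=1.0):
        """Power-law gamma expression V_out = A * V_in**gamma.

        `v_in` is a non-negative value, typically normalised to [0, 1].
        """
        return a * (v_in ** gamma)

    print(gamma_encode(0.5))    # ~0.73: mid-grey is encoded as a brighter value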

Photometry (astronomy)

Photometry, from Greek photo- ("light") and -metry ("measure"), is a technique used in astronomy that is concerned with measuring the flux or intensity of light radiated by astronomical objects. This light is measured through a telescope using a photometer, often made using electronic devices such as a CCD photometer or a photoelectric photometer that converts light into an electric current by the photoelectric effect. When calibrated against standard stars of known intensity and colour, photometers can measure the brightness or apparent magnitude of celestial objects.

Exposure value – Measure of illuminance for a combination of a camera's shutter speed and f-number

In photography, exposure value (EV) is a number that represents a combination of a camera's shutter speed and f-number, such that all combinations that yield the same exposure have the same EV. Exposure value is also used to indicate an interval on the photographic exposure scale, with a difference of 1 EV corresponding to a standard power-of-2 exposure step, commonly referred to as a stop.
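
In its usual formulation EV = log2(N^2 / t), where N is the f-number and t is the shutter time in seconds; a small illustrative Python sketch (the example settings are arbitrary):

    import math

    def exposure_value(f_number, shutter_time_s):
        """EV = log2(N^2 / t); equivalent exposures share the same EV."""
        return math.log2(f_number ** 2 / shutter_time_s)

    print(exposure_value(2.8, 1 / 50))     # ~8.6
    print(exposure_value(4.0, 1 / 25))     # ~8.6: a nominally equivalent combination
    print(exposure_value(2.8, 1 / 100))    # ~9.6: one stop (1 EV) less exposure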

Cosmic distance ladder

The cosmic distance ladder is the succession of methods by which astronomers determine the distances to celestial objects. A real direct distance measurement of an astronomical object is possible only for those objects that are "close enough" to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances and methods that work at larger distances. Several methods rely on a standard candle, which is an astronomical object that has a known luminosity.

Illuminance

In photometry, illuminance is the total luminous flux incident on a surface, per unit area. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. Luminous emittance is also known as luminous exitance.

In photography, reciprocity is the inverse relationship between the intensity and duration of light that determines the reaction of light-sensitive material. Within a normal exposure range for film stock, for example, the reciprocity law states that the film response will be determined by the total exposure, defined as intensity × time. Therefore, the same response can result from reducing duration and increasing light intensity, and vice versa.

In astronomy, surface brightness quantifies the apparent brightness or flux density per unit angular area of a spatially extended object such as a galaxy or nebula, or of the night sky background. An object's surface brightness depends on its surface luminosity density, i.e., its luminosity emitted per unit surface area. In visible and infrared astronomy, surface brightness is often quoted on a magnitude scale, in magnitudes per square arcsecond in a particular filter band or photometric system.
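
For a source of integrated magnitude m spread uniformly over A square arcseconds, the mean surface brightness is m + 2.5 log10(A) magnitudes per square arcsecond; a minimal illustrative sketch (the galaxy values are hypothetical):

    import math

    def mean_surface_brightness(total_mag, area_arcsec2):
        """Mean surface brightness (mag/arcsec^2) of a uniform extended source."""
        return total_mag + 2.5 * math.log10(area_arcsec2)

    # A hypothetical galaxy of total magnitude 12 covering 100 square arcseconds.
    print(mean_surface_brightness(12.0, 100.0))    # 17.0 mag/arcsec^2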

Magnitude (astronomy)

In astronomy, magnitude is a unitless measure of the brightness of an object in a defined passband, often in the visible or infrared spectrum, but sometimes across all wavelengths. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus.

Photographic magnitude is a measure of the relative brightness of a star or other astronomical object as imaged on a photographic film emulsion with a camera attached to a telescope. An object's apparent photographic magnitude depends on its intrinsic luminosity, its distance and any extinction of light by interstellar matter existing along the line of sight to the observer.

In astronomy, color–color diagrams are a means of comparing the apparent magnitudes of stars at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as color. On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and then the color defined by another brightness difference will be plotted on the vertical axis.

The AB magnitude system is an astronomical magnitude system. Unlike many other magnitude systems, it is based on flux measurements that are calibrated in absolute units, namely spectral flux densities.
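
In the AB system the magnitude is tied directly to the spectral flux density, conventionally m_AB = −2.5 log10(f_ν / 3631 Jy); a small illustrative Python sketch:

    import math

    def ab_magnitude(flux_density_jy):
        """AB magnitude from spectral flux density in janskys (zero point ~3631 Jy)."""
        return -2.5 * math.log10(flux_density_jy / 3631.0)

    print(ab_magnitude(3631.0))    # 0.0 by construction
    print(ab_magnitude(3.631))     # 7.5: a factor of 1000 fainter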

In astronomical photometry, the Ultraviolet and Optical Telescope (UVOT) on the Neil Gehrels Swift Observatory observes astronomical objects in its 17-by-17 arc minute field of view through one of several filters or grisms. The seven filters, which are similar to those on the XMM-Newton-OM instrument, cover the near-ultraviolet and optical range. The brightness of an object observed in the three optical filters, called u, b, and v, can be converted into the more common Morgan-Johnson magnitudes. The three ultraviolet filters probe a spectral region that is not observable from the ground.

The Richter scale – also called the Richter magnitude scale or Richter's magnitude scale – is a measure of the strength of earthquakes, developed by Charles F. Richter and presented in his landmark 1935 paper, where he called it the "magnitude scale". This was later revised and renamed the local magnitude scale, denoted as ML.

The Palomar Transient Factory (PTF) was an astronomical survey using a wide-field survey camera designed to search for optical transient and variable sources such as variable stars, supernovae, asteroids and comets. The project completed commissioning in summer 2009, and continued until December 2012. It has since been succeeded by the Intermediate Palomar Transient Factory (iPTF), which itself transitioned to the Zwicky Transient Facility in 2017/18. All three surveys are registered at the MPC under the same observatory code for their astrometric observations.

In astronomy, the color index is a simple numerical expression that determines the color of an object, which in the case of a star gives its temperature. The smaller the color index, the more blue the object is. Conversely, the larger the color index, the more red the object is. This is a consequence of the logarithmic magnitude scale, in which brighter objects have smaller magnitudes than dimmer ones. For comparison, the yellowish Sun has a B−V index of 0.656 ± 0.005, whereas the bluish Rigel has a B−V of −0.03. Traditionally, the color index uses Vega as a zero point.
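
A color index is just the difference of two magnitudes measured in different bands; a minimal illustrative sketch (the star's B and V magnitudes are hypothetical values):

    def color_index(short_band_mag, long_band_mag):
        """Color index, e.g. B - V; smaller or negative values mean a bluer object."""
        return short_band_mag - long_band_mag

    # Hypothetical measured magnitudes of a star:
    b_mag, v_mag = 12.40, 11.74
    print(color_index(b_mag, v_mag))    # 0.66, a roughly Sun-like B-V color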

In astronomy, the zero point in a photometric system is defined as the magnitude of an object that produces 1 count per second on the detector. The zero point is used to calibrate a system to the standard magnitude system, as the flux detected from stars will vary from detector to detector. Traditionally, Vega is used as the calibration star for the zero point magnitude in specific pass bands, although often an average of multiple stars is used for higher accuracy. It is not always practical to observe Vega to calibrate the detector, so for general purposes, any star in the sky with a known apparent magnitude may be used.
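
This ties the zero point back to the instrumental magnitudes discussed above: a calibrated magnitude can be obtained as m = ZP − 2.5 log10(count rate). A minimal sketch, assuming an illustrative zero point of 25.0 magnitudes:

    import math

    def calibrated_magnitude(counts_per_second, zero_point):
        """Calibrated magnitude m = ZP - 2.5*log10(count rate).

        The zero point is the magnitude of a source producing 1 count per second.
        """
        return zero_point - 2.5 * math.log10(counts_per_second)

    print(calibrated_magnitude(1.0, 25.0))      # 25.0, by definition of the zero point
    print(calibrated_magnitude(1.0e4, 25.0))    # 15.0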

References

  1. Ofek, E. O.; Laher, R.; Law, N.; et al. (2012). "The Palomar Transient Factory Photometric Calibration". Publications of the Astronomical Society of the Pacific. 124 (911): 62–73. arXiv:1112.4851. Bibcode:2012PASP..124...62O. doi:10.1086/664065. S2CID 20527550.
  2. Mighell, Kenneth J. (1999). "Algorithms for CCD Stellar Photometry". ASP Conference Series. 172: 317–328.
  3. Harwit, Martin (1982). Astrophysical Concepts. pp. 508–509. ISBN 0-910533-00-8.