Graphical timeline of the Big Bang

This timeline of the Big Bang shows a sequence of events as currently theorized by scientists.

It is a logarithmic scale that shows 10·log10(time in seconds) instead of the time in seconds. For example, one microsecond is 10·log10(10^−6) = −60. To convert a value of −30 read on the scale back to seconds, calculate 10^(−30/10) = 10^−3 second = one millisecond. On a logarithmic time scale, each step lasts ten times longer than the previous step.
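The conversion between seconds and the scale reading is a pair of inverse operations; a minimal Python sketch (the function names are illustrative):

```python
import math

def seconds_to_scale(seconds: float) -> float:
    """Convert a time in seconds to the timeline's 10*log10 scale."""
    return 10 * math.log10(seconds)

def scale_to_seconds(scale_value: float) -> float:
    """Convert a scale reading back to a time in seconds."""
    return 10 ** (scale_value / 10)

print(seconds_to_scale(1e-6))   # one microsecond -> -60.0
print(scale_to_seconds(-30))    # -30 on the scale -> 0.001 s (one millisecond)
```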

[Graphical timeline omitted. In chronological order it covers: the Big Bang, the Planck epoch (Planck time), the grand unification epoch, the inflationary epoch, the electroweak epoch, the quark, hadron, lepton, and photon epochs, Big Bang nucleosynthesis, matter domination, recombination and photon decoupling (the cosmic microwave background), the Dark Ages, reionization, and the start of the Stelliferous Era.]

Related Research Articles

In astronomy, absolute magnitude is a measure of the luminosity of a celestial object on an inverse logarithmic astronomical magnitude scale. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs, without extinction of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared among each other on a magnitude scale. For Solar System bodies that shine in reflected light, a different definition of absolute magnitude (H) is used, based on a standard reference distance of one astronomical unit.

Analysis of algorithms: Study of resources used by an algorithm

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes or the number of storage locations it uses. An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
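As a toy illustration of best- versus worst-case behavior, consider counting comparisons in a linear search (a hypothetical helper, not from the article):

```python
def linear_search_steps(items, target):
    """Return (index, comparisons) for a simple linear search; index -1 if absent."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps - 1, steps
    return -1, len(items)

data = list(range(100))
# Best case: the target is first, so one comparison suffices.
print(linear_search_steps(data, 0)[1])    # 1
# Worst case: the target is last (or absent), requiring n comparisons.
print(linear_search_steps(data, 99)[1])   # 100
```

The worst-case count, n comparisons for an input of size n, is the upper-bound function usually quoted for this algorithm.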

The decibel (dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) or a root-power ratio of 10^(1/20).
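The decibel relations translate directly into code; a small sketch with illustrative function names:

```python
import math

def power_ratio_to_db(p_out: float, p_in: float) -> float:
    """Level difference in decibels for a ratio of two powers."""
    return 10 * math.log10(p_out / p_in)

def db_to_power_ratio(db: float) -> float:
    """Power ratio corresponding to a level difference in decibels."""
    return 10 ** (db / 10)

print(power_ratio_to_db(2, 1))    # doubling the power is about +3.01 dB
print(db_to_power_ratio(1))       # 1 dB corresponds to a power ratio of about 1.259
```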

Logarithm: Mathematical function, inverse of an exponential function

In mathematics, the logarithm is the inverse function to exponentiation. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 1000 = 10^3, the logarithm base 10 of 1000 is 3, or log10(1000) = 3. The logarithm of x to base b is denoted as logb(x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter, such as in big O notation.
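A quick check of the definition using Python's standard math module (the helper name is illustrative):

```python
import math

# The logarithm is the inverse of exponentiation:
# log_b(x) is the exponent to which b must be raised to produce x.
print(math.log10(1000))          # 3.0, since 10**3 == 1000

# Any base via the change-of-base formula: log_b(x) = ln(x) / ln(b)
def log_base(x: float, b: float) -> float:
    return math.log(x) / math.log(b)

print(log_base(243, 3))          # approximately 5, since 3**5 == 243
```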

An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is 10, the order of magnitude can be understood as the number of digits in the base-10 representation of the value. Similarly, if the reference value is one of some powers of 2, since computers store data in a binary format, the magnitude can be understood in terms of the amount of computer memory needed to store that value.
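With base 10, the order of magnitude is the floor of the base-10 logarithm; a minimal sketch (the helper name is illustrative):

```python
import math

def order_of_magnitude(x: float, base: float = 10) -> int:
    """Floor of the logarithm of |x| in the given base."""
    return math.floor(math.log(abs(x), base))

print(order_of_magnitude(4500))     # 3: 4500 is on the order of 10^3
print(order_of_magnitude(0.002))    # -3
# For positive integers, order of magnitude + 1 equals the digit count:
print(order_of_magnitude(4500) + 1 == len(str(4500)))  # True
```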

In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.
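The theorem's capacity formula, C = B·log2(1 + S/N), can be evaluated directly; a sketch with an illustrative telephone-like channel (the parameter values are an assumed example):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: 3 kHz of bandwidth with a 30 dB signal-to-noise ratio (ratio 1000)
snr = 10 ** (30 / 10)
print(channel_capacity(3000, snr))   # roughly 29.9 kbit/s
```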

Logarithmic scale: Measurement scale based on orders of magnitude

A logarithmic scale is a way of displaying numerical data over a very wide range of values in a compact way. As opposed to a linear number line, on which every unit of distance corresponds to adding the same amount, on a logarithmic scale every unit of length corresponds to multiplying the previous value by the same amount. Hence, such a scale is nonlinear. On a logarithmic scale, the numbers 1, 2, 3, 4, 5, and so on are not equally spaced; rather, the numbers 10, 100, 1000, 10000, and 100000 are equally spaced, as are the numbers 2, 4, 8, 16, 32, and so on. Exponential growth curves are often displayed on a log scale, as otherwise they would increase too quickly to fit within a small graph.

A logarithmic timeline is a timeline laid out according to a logarithmic scale. This necessarily implies a zero point and an infinity point, neither of which can be displayed. The most natural zero point is the Big Bang, looking forward, but the most common is the ever-changing present, looking backward.

Mental calculation: Arithmetical calculations using only the human brain

Mental calculation consists of arithmetical calculations using only the human brain, with no help from any supplies or devices such as a calculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation, or even in a competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. People with unusually high ability to perform mental calculations are called mental calculators or lightning calculators.

The moment magnitude scale (Mw) is a measure of an earthquake's magnitude based on its seismic moment. It was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Like the local magnitude (Richter) scale (ML) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often say "Richter scale" when referring to the moment magnitude scale.

Horizon problem: Cosmological fine-tuning problem

The horizon problem is a cosmological fine-tuning problem within the Big Bang model of the universe. It arises due to the difficulty in explaining the observed homogeneity of causally disconnected regions of space in the absence of a mechanism that sets the same initial conditions everywhere. It was first pointed out by Wolfgang Rindler in 1956.

A cosmological decade (CÐ) is a division of the lifetime of the cosmos. The divisions are logarithmic in size, with base 10. Each successive cosmological decade represents a ten-fold increase in the total age of the universe.
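Assuming the common convention that decades are counted in years, so an age between 10^n and 10^(n+1) years falls in decade n, a minimal sketch (the helper name is illustrative):

```python
import math

def cosmological_decade(age_years: float) -> int:
    """Decade n such that the age lies in [10**n, 10**(n+1)) years."""
    return math.floor(math.log10(age_years))

print(cosmological_decade(13.8e9))   # the present age (~1.38e10 yr) falls in decade 10
```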

This timeline of our universe, spanning more than 20 billion years, shows the best estimates of major events from the universe's beginning to anticipated future events. Zero on the scale is the present day. A large step on the scale is one billion years; a small step, one hundred million years. The past is denoted by a minus sign: for example, the oldest rock on Earth formed about four billion years ago, and this is marked at −4×10^9 years. The Big Bang most likely happened 13.8 billion years ago; see age of the universe.

Decade (log scale): Unit for measuring ratios on a logarithmic scale

One decade is a unit for measuring ratios on a logarithmic scale, with one decade corresponding to a ratio of 10 between two numbers.

Regular number: Numbers that evenly divide powers of 60

Regular numbers are numbers that evenly divide powers of 60 (or, equivalently, powers of 30). Equivalently, they are the numbers whose only prime divisors are 2, 3, and 5. As an example, 60^2 = 3600 = 48 × 75, so as divisors of a power of 60 both 48 and 75 are regular.
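A regular number can be tested by stripping out the factors 2, 3, and 5; a small sketch (the helper name is illustrative):

```python
def is_regular(n: int) -> bool:
    """True if n's only prime divisors are 2, 3, and 5."""
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

print([k for k in range(1, 20) if is_regular(k)])
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18]
print(is_regular(48), is_regular(75))   # True True (both divide 60**2 == 3600)
```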

This timeline charts the Stelliferous Era, along with part of the preceding Primordial Era and part of the following Degenerate Era of the heat death scenario.

This is the timeline of the universe from the Big Bang to the heat death scenario. The different eras of the universe are shown. The heat death will occur in around 1.7×10^106 years, if protons decay.

The chronology of the universe describes the history and future of the universe according to Big Bang cosmology.

The Richter scale, also called the Richter magnitude scale, Richter's magnitude scale, and the Gutenberg–Richter scale, is a measure of the strength of earthquakes, developed by Charles Francis Richter in collaboration with Beno Gutenberg and presented in Richter's landmark 1935 paper, where he called it the "magnitude scale". It was later revised and renamed the local magnitude scale, denoted ML.

A logarithmic resistor ladder is an electronic circuit, composed of a series of resistors and switches, designed to create an attenuation from an input to an output signal, where the logarithm of the attenuation ratio is proportional to a binary number that represents the state of the switches.
