In mathematics, many logarithmic identities exist. The following is a compilation of the notable ones, many of which are used for computational purposes.
Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. Trivial logarithmic identities are:
| $\log_b(1) = 0$ | because | $b^0 = 1$ |
| $\log_b(b) = 1$ | because | $b^1 = b$ |
By definition, we know that:
$$\log_b(y) = x \iff b^x = y,$$
where $b > 0$ and $b \neq 1$.

Setting $x = 0$, we can see that $b^0 = y$, so $y = 1$. Substituting these values into the formula, we see that $\log_b(1) = 0$, which gets us the first property.

Setting $x = 1$, we can see that $b^1 = y$, so $y = b$. Substituting these values into the formula, we see that $\log_b(b) = 1$, which gets us the second property.
Logarithms and exponentials with the same base cancel each other:
$$b^{\log_b(x)} = x, \qquad \log_b(b^x) = x.$$
This is true because logarithms and exponentials are inverse operations, just as multiplication and division are inverse operations, and addition and subtraction are inverse operations.
Both of the above are derived from the following two equations that define a logarithm:
$$b^y = x \iff \log_b(x) = y$$
(note that in this explanation, the variables $x$ and $y$ may not be referring to the same number in both equations).

Looking at the equation $b^y = x$, and substituting the value $\log_b(x)$ for $y$, we get the following equation: $b^{\log_b(x)} = x$, which gets us the first equation. A rougher way to think about it is that $b^{\text{something}} = x$, and that the "something" is $\log_b(x)$.

Similarly, looking at the equation $\log_b(x) = y$, and substituting the value $b^y$ for $x$, we get the following equation: $\log_b(b^y) = y$, which gets us the second equation. A rougher way to think about it is that $\log_b(\text{something}) = y$, and that the "something" is $b^y$.
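As a quick numerical illustration, the following is a minimal Python sketch (using only the standard math module; the base and sample values are arbitrary choices) that checks both cancellation identities:

```python
import math

b = 3.0

# b ** log_b(x) == x, for any positive x
for x in [0.5, 1.0, 7.0, 100.0]:
    assert math.isclose(b ** math.log(x, b), x)

# log_b(b ** x) == x, for any real x
for x in [-2.0, 0.5, 1.0, 10.0]:
    assert math.isclose(math.log(b ** x, b), x)
```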
Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented in the table below. [2] The first three operations below assume that $x = b^c$ and/or $y = b^d$, so that $\log_b(x) = c$ and $\log_b(y) = d$. Derivations also use the log definitions $x = b^{\log_b(x)}$ and $x = \log_b(b^x)$.
| $\log_b(xy) = \log_b(x) + \log_b(y)$ | because | $b^c \cdot b^d = b^{c+d}$ |
| $\log_b\!\left(\tfrac{x}{y}\right) = \log_b(x) - \log_b(y)$ | because | $\tfrac{b^c}{b^d} = b^{c-d}$ |
| $\log_b(x^d) = d\,\log_b(x)$ | because | $(b^c)^d = b^{cd}$ |
| $\log_b\!\left(\sqrt[y]{x}\right) = \frac{\log_b(x)}{y}$ | because | $\sqrt[y]{x} = x^{1/y}$ |
| $x^{\log_b(y)} = y^{\log_b(x)}$ | because | $x^{\log_b(y)} = b^{\log_b(x)\log_b(y)} = \left(b^{\log_b(y)}\right)^{\log_b(x)} = y^{\log_b(x)}$ |
| $c\,\log_b(x) + d\,\log_b(y) = \log_b(x^c y^d)$ | because | $\log_b(x^c y^d) = \log_b(x^c) + \log_b(y^d)$ |
Where $b$, $x$, and $y$ are positive real numbers and $b \neq 1$, and $c$ and $d$ are real numbers.
The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:
$$xy = b^{\log_b(x)} \cdot b^{\log_b(y)} = b^{\log_b(x) + \log_b(y)} \implies \log_b(xy) = \log_b\!\left(b^{\log_b(x) + \log_b(y)}\right) = \log_b(x) + \log_b(y)$$
The law for powers exploits another of the laws of indices:
$$x^y = \left(b^{\log_b(x)}\right)^y = b^{y\,\log_b(x)} \implies \log_b(x^y) = y\,\log_b(x)$$
The law relating to quotients then follows:
$$\log_b\!\left(\frac{x}{y}\right) = \log_b\!\left(x \cdot y^{-1}\right) = \log_b(x) + \log_b\!\left(y^{-1}\right) = \log_b(x) - \log_b(y)$$
Similarly, the root law is derived by rewriting the root as a reciprocal power:
$$\log_b\!\left(\sqrt[y]{x}\right) = \log_b\!\left(x^{\frac{1}{y}}\right) = \frac{1}{y}\,\log_b(x)$$
These are the three main logarithm laws/rules/principles, [3] from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and the derivations/proofs below hinge on those facts. There are multiple ways to derive/prove each logarithm law; what follows is just one possible method.
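Before the formal derivations, here is a short Python sketch (standard library only; the base and operands are arbitrary choices) that spot-checks the product, quotient, power, and root laws numerically:

```python
import math

b, x, y, p = 2.0, 12.0, 5.0, 3.0

def log_b(v):
    """Logarithm of v to the base b."""
    return math.log(v, b)

assert math.isclose(log_b(x * y), log_b(x) + log_b(y))    # product law
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))    # quotient law
assert math.isclose(log_b(x ** p), p * log_b(x))          # power law
assert math.isclose(log_b(x ** (1 / p)), log_b(x) / p)    # root law
```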
To state the logarithm of a product law formally:
$$\forall b \in \mathbb{R}_{>0},\ b \neq 1,\ \forall x, y \in \mathbb{R}_{>0}:\quad \log_b(xy) = \log_b(x) + \log_b(y)$$
Derivation:
Let $b \in \mathbb{R}_{>0}$, where $b \neq 1$, and let $x, y \in \mathbb{R}_{>0}$. We want to relate the expressions $\log_b(x)$ and $\log_b(y)$. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to $\log_b(x)$ and $\log_b(y)$ quite often, we will give them some variable names to make working with them easier: let $m = \log_b(x)$, and let $n = \log_b(y)$.

Rewriting these as exponentials, we see that
$$m = \log_b(x) \iff b^m = x, \qquad n = \log_b(y) \iff b^n = y.$$

From here, we can relate $b^m$ (i.e. $x$) and $b^n$ (i.e. $y$) using exponent laws as
$$xy = b^m \cdot b^n = b^{m+n}.$$

To recover the logarithms, we apply $\log_b$ to both sides of the equality:
$$\log_b(xy) = \log_b\!\left(b^{m+n}\right).$$

The right side may be simplified using one of the logarithm properties from before: we know that $\log_b(b^x) = x$, giving
$$\log_b(xy) = m + n.$$

We now resubstitute the values for $m$ and $n$ into our equation, so our final expression is only in terms of $x$, $y$, and $b$:
$$\log_b(xy) = \log_b(x) + \log_b(y).$$
This completes the derivation.
To state the logarithm of a quotient law formally:
$$\forall b \in \mathbb{R}_{>0},\ b \neq 1,\ \forall x, y \in \mathbb{R}_{>0}:\quad \log_b\!\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y)$$
Derivation:
Let $b \in \mathbb{R}_{>0}$, where $b \neq 1$, and let $x, y \in \mathbb{R}_{>0}$.

We want to relate the expressions $\log_b(x)$ and $\log_b(y)$. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to $\log_b(x)$ and $\log_b(y)$ quite often, we will give them some variable names to make working with them easier: let $m = \log_b(x)$, and let $n = \log_b(y)$.

Rewriting these as exponentials, we see that:
$$m = \log_b(x) \iff b^m = x, \qquad n = \log_b(y) \iff b^n = y.$$

From here, we can relate $b^m$ (i.e. $x$) and $b^n$ (i.e. $y$) using exponent laws as
$$\frac{x}{y} = \frac{b^m}{b^n} = b^{m-n}.$$

To recover the logarithms, we apply $\log_b$ to both sides of the equality:
$$\log_b\!\left(\frac{x}{y}\right) = \log_b\!\left(b^{m-n}\right).$$

The right side may be simplified using one of the logarithm properties from before: we know that $\log_b(b^x) = x$, giving
$$\log_b\!\left(\frac{x}{y}\right) = m - n.$$

We now resubstitute the values for $m$ and $n$ into our equation, so our final expression is only in terms of $x$, $y$, and $b$:
$$\log_b\!\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y).$$
This completes the derivation.
To state the logarithm of a power law formally:
$$\forall b \in \mathbb{R}_{>0},\ b \neq 1,\ \forall x \in \mathbb{R}_{>0},\ \forall p \in \mathbb{R}:\quad \log_b(x^p) = p\,\log_b(x)$$
Derivation:
Let $b \in \mathbb{R}_{>0}$, where $b \neq 1$, let $x \in \mathbb{R}_{>0}$, and let $p \in \mathbb{R}$. For this derivation, we want to simplify the expression $\log_b(x^p)$. To do this, we begin with the simpler expression $\log_b(x)$. Since we will be using $\log_b(x)$ often, we will define it as a new variable: let $m = \log_b(x)$.

To more easily manipulate the expression, we rewrite it as an exponential. By definition, $m = \log_b(x) \iff b^m = x$, so we have
$$b^m = x.$$

Similar to the derivations above, we take advantage of another exponent law. In order to have $x^p$ in our final expression, we raise both sides of the equality to the power of $p$:
$$\left(b^m\right)^p = x^p \implies b^{mp} = x^p,$$
where we used the exponent law $\left(b^m\right)^p = b^{mp}$.

To recover the logarithms, we apply $\log_b$ to both sides of the equality:
$$\log_b\!\left(b^{mp}\right) = \log_b(x^p).$$

The left side of the equality can be simplified using a logarithm law, which states that $\log_b(b^x) = x$:
$$mp = \log_b(x^p).$$

Substituting in the original value for $m$, rearranging, and simplifying gives
$$\log_b(x^p) = p\,\log_b(x).$$
This completes the derivation.
To state the change of base logarithm formula formally:
$$\forall a, b \in \mathbb{R}_{>0},\ a \neq 1,\ b \neq 1,\ \forall x \in \mathbb{R}_{>0}:\quad \log_b(x) = \frac{\log_a(x)}{\log_a(b)}$$
This identity is useful for evaluating logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
Let $a, b \in \mathbb{R}_{>0}$, where $a \neq 1$ and $b \neq 1$, and let $x \in \mathbb{R}_{>0}$. Here, $a$ and $b$ are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1. The number $x$ will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term $\log_b(x)$ quite frequently, we define it as a new variable: let $y = \log_b(x)$.

To more easily manipulate the expression, it can be rewritten as an exponential:
$$b^y = x.$$

Applying $\log_a$ to both sides of the equality,
$$\log_a\!\left(b^y\right) = \log_a(x).$$

Now, using the logarithm of a power property, which states that $\log_a(b^y) = y\,\log_a(b)$,
$$y\,\log_a(b) = \log_a(x).$$

Isolating $y$, we get the following:
$$y = \frac{\log_a(x)}{\log_a(b)}.$$

Resubstituting $y = \log_b(x)$ back into the equation,
$$\log_b(x) = \frac{\log_a(x)}{\log_a(b)}.$$

This completes the proof that $\log_b(x) = \frac{\log_a(x)}{\log_a(b)}$.
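As an illustration of the calculator use case mentioned above, here is a minimal Python sketch (the values 41 and 7 are arbitrary choices) that evaluates a logarithm of an arbitrary base from ln or log10 alone:

```python
import math

# Compute log_7(41) using only "ln" and "log10", as a calculator would.
x, b = 41.0, 7.0
via_ln = math.log(x) / math.log(b)         # log_b(x) = ln(x) / ln(b)
via_log10 = math.log10(x) / math.log10(b)  # log_b(x) = log10(x) / log10(b)

assert math.isclose(via_ln, via_log10)  # both bases give the same answer
assert math.isclose(b ** via_ln, x)     # and it really is log_7(41)
```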
This formula has several consequences:
$$\log_b(a) = \frac{1}{\log_a(b)}$$
$$\log_{b^n}(a) = \frac{\log_b(a)}{n}$$
$$b^{\log_a(d)} = d^{\log_a(b)}$$
$$-\log_b(a) = \log_b\!\left(\frac{1}{a}\right) = \log_{1/b}(a)$$
$$\log_{b_1}(a_1)\cdots\log_{b_n}(a_n) = \log_{b_{\pi(1)}}(a_1)\cdots\log_{b_{\pi(n)}}(a_n),$$
where $\pi$ is any permutation of the subscripts $1, \ldots, n$. For example
$$\log_b(w)\cdot\log_a(b)\cdot\log_d(c) = \log_d(w)\cdot\log_b(b)\cdot\log_a(c).$$
The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:
| $\log_b(a+c) = \log_b(a) + \log_b\!\left(1 + \tfrac{c}{a}\right)$ | because | $a + c = a \times \left(1 + \tfrac{c}{a}\right)$ |
| $\log_b(a-c) = \log_b(a) + \log_b\!\left(1 - \tfrac{c}{a}\right)$ | because | $a - c = a \times \left(1 - \tfrac{c}{a}\right)$ |
Note that the subtraction identity is not defined if $a = c$, since the logarithm of zero is not defined. Also note that, when programming, $a$ and $c$ may have to be switched on the right hand side of the equations if $c > a$, to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates $\ln(1+x)$ without underflow (when $x$ is small).
More generally:
$$\log_b \sum_{i=0}^{N} a_i = \log_b(a_0) + \log_b\!\left(1 + \sum_{i=1}^{N} \frac{a_i}{a_0}\right) = \log_b(a_0) + \log_b\!\left(1 + \sum_{i=1}^{N} b^{\left(\log_b(a_i) - \log_b(a_0)\right)}\right)$$
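In log-probability code this rule is usually wrapped in a small helper. The following is a minimal Python sketch (the helper name log_add is ours, not a standard library function) of the summation rule with the argument swap and log1p noted above:

```python
import math

def log_add(log_a, log_c):
    """Return ln(a + c) given ln(a) and ln(c), staying in log space.

    Swaps the arguments so the ratio passed to log1p is at most 1,
    which avoids losing the "1 +" to rounding, as noted above.
    """
    if log_c > log_a:
        log_a, log_c = log_c, log_a
    return log_a + math.log1p(math.exp(log_c - log_a))

# Two log-probabilities so small that exp() alone would underflow to 0.0:
lp, lq = -1000.0, -1002.0
print(log_add(lp, lq))  # ln(e^-1000 + e^-1002), about -999.8731
```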
A useful identity involving exponents:
$$x^{\frac{\log(\log(x))}{\log(x)}} = \log(x),$$
or more universally:
$$x^{\frac{\log(a)}{\log(x)}} = a.$$
All are accurate around $x = 0$, but not for large numbers.
The last limit, $\lim_{x\to\infty} \frac{\log_a(x)}{x^c} = 0$ for any $c > 0$, is often summarized as "logarithms grow more slowly than any power or root of x".
$$\ln(x) = \int_1^x \frac{1}{t}\,dt = \lim_{n\to\infty} \sum_{i=1}^{n} \frac{x-1}{n} \cdot \frac{1}{x_i^*}$$
for $x > 0$, where $\Delta x = \frac{x-1}{n}$ and $x_i^*$ is a sample point in each interval.
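A short Python sketch of this Riemann-sum representation, here with midpoint sample points (the choice of midpoints and the value of n are arbitrary):

```python
import math

def ln_riemann(x, n=100_000):
    """Approximate ln(x) as a midpoint Riemann sum of 1/t over [1, x]."""
    dt = (x - 1) / n                       # width of each subinterval
    return sum(dt / (1 + (k + 0.5) * dt)   # 1/t at the midpoint of interval k
               for k in range(n))

print(ln_riemann(2.0), math.log(2.0))  # both about 0.693147...
```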
The natural logarithm has a well-known Taylor series [7] expansion that converges for $x$ in the open-closed interval $(-1, 1]$:
$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\,x^k.$$
Within this interval, for $x = 1$, the series is conditionally convergent, and for all other values, it is absolutely convergent. For $x > 1$ or $x \leq -1$, the series does not converge to $\ln(1+x)$. In these cases, different representations or methods must be used to evaluate the logarithm.
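A minimal Python sketch contrasting the fast convergence inside the interval with the slow conditional convergence at $x = 1$ (the term counts are arbitrary choices):

```python
import math

def ln1p_series(x, terms=10_000):
    """Partial sum of the Maclaurin series for ln(1 + x), -1 < x <= 1."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

print(ln1p_series(0.5), math.log1p(0.5))  # agrees to machine precision
print(ln1p_series(1.0), math.log(2.0))    # x = 1: converges, but slowly
```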
It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices. [8] The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as [9]
$$\lim_{n\to\infty}\left(H_{2n} - H_n\right) = \ln(2),$$
which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals $\ln(2)$ in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here $H_k$ denotes the $k$-th harmonic number, defined as
$$H_k = \sum_{j=1}^{k} \frac{1}{j}.$$
The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals. [10] [8] [11] As $n$ tends towards infinity, the difference between the harmonic numbers $H_{2n}$ and $H_n$ converges to a non-zero value, $\ln(2)$. This persistent non-zero difference precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence. [12] [13] The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering the following.

The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from $n+1$ to $2n$:
$$H_{2n} - H_n = \sum_{k=n+1}^{2n} \frac{1}{k}.$$

This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of $\frac{1}{x}$ from $n$ to $2n$:
$$\int_{n}^{2n} \frac{1}{x}\,dx = \ln(2n) - \ln(n) = \ln(2),$$
so that $\lim_{n\to\infty}\left(H_{2n} - H_n\right) = \ln(2)$.
As the window's lower bound begins at $n+1$ and the upper bound extends to $2n$, both of which tend toward infinity as $n \to \infty$, the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity. This mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from $n$ to $2n$, where the onset $n$ marks this minimally discrete region.
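A short Python sketch (direct summation; the values of n are arbitrary choices) showing the difference approaching $\ln(2)$:

```python
import math

def H(n):
    """n-th harmonic number, summed directly."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, H(2 * n) - H(n))  # tends to ln(2) ~ 0.693147... as n grows

print(math.log(2))
```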
The harmonic number difference formula for $\ln(m)$ is an extension [9] of the classic, alternating identity of $\ln(2)$:
$$\ln(2) = \lim_{n\to\infty} \sum_{k=1}^{n} \left(\frac{1}{2k-1} - \frac{1}{2k}\right),$$
which can be generalized as the double series over the residues of $m$:
$$\ln(m) = \sum_{x \in \langle m \rangle} \sum_{r=1}^{m-1} \left(\frac{1}{x-r} - \frac{1}{x}\right),$$
where $\langle m \rangle = \{m, 2m, 3m, \ldots\}$ is the principal ideal generated by $m$. Subtracting $\frac{1}{x}$ from each term (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as $m$ increases. For example:
$$\ln(4) = \sum_{x \in \langle 4 \rangle} \left(\frac{1}{x-3} + \frac{1}{x-2} + \frac{1}{x-1} - \frac{3}{x}\right) = \left(1 + \frac{1}{2} + \frac{1}{3} - \frac{3}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} - \frac{3}{8}\right) + \cdots$$
This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues $r$ ensures that adjustments are uniformly applied across all possible offsets within each block of $m$ terms. This uniform distribution of the "correction" across the intervals defined by $\langle m \rangle$ functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series.
A fundamental feature of the proof is the accumulation of the subtrahends into a unit fraction: the corrections of $\frac{1}{x}$ within each block combine to $\frac{m}{x} = \frac{1}{k}$ for $x = km$. Since the cardinality of each block depends on the selection of one of two possible minima, the corresponding integral, as a set-theoretic procedure, is a function of the maximum (which remains consistent across both interpretations), not the cardinality (which is ambiguous [14] [15] due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window—a shifting $m$-tuple—over the harmonic series, advancing the window by $m$ positions to select the next $m$-tuple, and offsetting each element of each tuple by $\frac{1}{x}$ relative to the window's absolute position $x$. The sum $H_{mn}$ corresponds to the window's expanding upper bound, which scales without bound; the sum $H_n$ corresponds to the prefix trimmed from the series to establish the window's moving lower bound $n+1$; and $\ln(m)$ is the limit of the sliding window (the scaled, truncated [16] series):
$$\ln(m) = \lim_{n\to\infty}\left(H_{mn} - H_n\right).$$
To remember higher integrals, it is convenient to define
$$x^{[n]} = x^{n}\left(\log(x) - H_n\right),$$
where $H_n$ is the $n$th harmonic number:
$$x^{[0]} = \log x, \qquad x^{[1]} = x\log(x) - x, \qquad x^{[2]} = x^2\log(x) - \frac{3}{2}x^2, \qquad x^{[3]} = x^3\log(x) - \frac{11}{6}x^3.$$
Then
$$\frac{d}{dx}\,x^{[n]} = n\,x^{[n-1]}, \qquad \int x^{[n]}\,dx = \frac{x^{[n+1]}}{n+1} + C.$$
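A minimal Python sketch checking the differentiation rule numerically by a central finite difference (the test point, order, and step size are arbitrary choices):

```python
import math

def bracket(x, n):
    """x^[n] = x**n * (ln(x) - H_n), with H_0 = 0."""
    H_n = sum(1.0 / k for k in range(1, n + 1))
    return x ** n * (math.log(x) - H_n)

# Check d/dx x^[n] = n * x^[n-1] at a sample point:
x, n, h = 2.5, 3, 1e-6
numeric = (bracket(x + h, n) - bracket(x - h, n)) / (2 * h)
print(numeric, n * bracket(x, n - 1))  # should agree to about 6 digits
```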
The identities of logarithms can be used to approximate large numbers. Note that $\log_b(a) + \log_b(c) = \log_b(ac)$, where $a$, $b$, and $c$ are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, $2^{32{,}582{,}657} - 1$. To get the base-10 logarithm, we would multiply $32{,}582{,}657$ by $\log_{10}(2)$, getting $9{,}808{,}357.09543 = 9{,}808{,}357 + 0.09543$. We can then get $10^{9{,}808{,}357} \times 10^{0.09543} \approx 1.25 \times 10^{9{,}808{,}357}$.
Similarly, factorials can be approximated by summing the logarithms of the terms.
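A short Python sketch of both approximations (standard library only; the factorial size of 1000 is an arbitrary choice):

```python
import math

# Magnitude and leading digits of the 44th Mersenne prime, 2**32582657 - 1,
# recovered from its base-10 logarithm (the "-1" is negligible at this scale).
exponent = 32_582_657
log10_value = exponent * math.log10(2)       # about 9,808,357.09543
int_part, frac_part = divmod(log10_value, 1)
print(f"~{10 ** frac_part:.2f} x 10^{int(int_part)}")  # ~1.25 x 10^9808357

# Factorials the same way: log10(1000!) by summing the logs of the factors.
log10_fact = sum(math.log10(k) for k in range(1, 1001))
print(f"1000! has {int(log10_fact) + 1} digits")
```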
The complex logarithm is the complex number analogue of the logarithm function. No single-valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single-valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis and coincides with a single branch of the multivalued version.
In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.
The multiple valued version of log(z) is a set, but it is easier to write it without braces, and using it in formulas follows obvious rules.
When $k$ is any integer:
$$\log(z) = \ln(|z|) + i\arg(z) = \ln(|z|) + i\left(\operatorname{Arg}(z) + 2\pi k\right)$$
Principal value forms:
$$\operatorname{Log}(1) = 0$$
Multiple value forms, for any $k$ an integer:
$$\log(1) = 2\pi i k$$
Principal value forms:
$$\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \pmod{2\pi i}$$
Multiple value forms:
$$\log(z_1) + \log(z_2) = \log(z_1 z_2)$$
A complex power of a complex number can have many possible values.
Principal value form:
$$z^w = e^{w \operatorname{Log}(z)}$$
Multiple value forms:
$$z^w = e^{w \log(z)}$$
Where $k_1$, $k_2$ are any integers:
$$\log(z^w) = w \log(z) + 2\pi i k_2 = w \operatorname{Log}(z) + 2\pi i k_1 w + 2\pi i k_2$$
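A minimal Python sketch using the standard cmath module (the sample point $z = -1 + i$ and the values of $k$ are arbitrary choices): the principal value, and a few of the other values of the multivalued logarithm, all of which exponentiate back to $z$:

```python
import cmath

z = -1 + 1j

# Principal value: Log(z) = ln|z| + i*Arg(z), with Arg(z) in (-pi, pi].
principal = cmath.log(z)
print(principal)  # (0.34657...+2.35619...j)

# Other values of the multivalued log differ by integer multiples of 2*pi*i;
# each one exponentiates back to z.
for k in (-1, 0, 1):
    value = principal + 2j * cmath.pi * k
    print(k, cmath.exp(value))  # each approximately (-1+1j)
```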