List of logarithmic identities

In mathematics, many logarithmic identities exist. The following is a compilation of the notable ones, many of which are used for computational purposes.

Trivial identities

Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. Trivial logarithmic identities are:

log_b(1) = 0 because b^0 = 1
log_b(b) = 1 because b^1 = b

Explanations

By definition, we know that:

log_b(y) = x if and only if b^x = y,

where b ≠ 1 and b > 0.

Setting x = 0, we can see that b^0 = 1. So, substituting these values into the formula, we see that log_b(1) = 0, which gets us the first property.

Setting x = 1, we can see that b^1 = b. So, substituting these values into the formula, we see that log_b(b) = 1, which gets us the second property.

Cancelling exponentials

Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations, much as multiplication and division are inverse operations, and addition and subtraction are inverse operations.

b^(log_b(x)) = x
log_b(b^x) = x [1]

Both of the above are derived from the following two equations that define a logarithm: b^y = x and log_b(x) = y (note that in this explanation, the variables x and y may not be referring to the same number in both equations).

Looking at the equation b^y = x, and substituting the value log_b(x) for y, we get the following equation: b^(log_b(x)) = x, which gets us the first equation. Another, rougher way to think about it is that b^(something) = x, and that the "something" is log_b(x).

Looking at the equation log_b(x) = y, and substituting the value b^y for x, we get the following equation: log_b(b^y) = y, which gets us the second equation. Another, rougher way to think about it is that log_b(something) = y, and that the "something" is b^y.
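As a quick numerical illustration (not part of the cited source), the two cancellation identities can be checked with Python's standard math module; the base b = 3 and argument x = 42 below are arbitrary choices.

```python
import math

b, x = 3.0, 42.0  # arbitrary positive base (b != 1) and positive argument

# Exponentiation undoes the logarithm: b**(log_b(x)) recovers x.
assert math.isclose(b ** math.log(x, b), x)

# The logarithm undoes exponentiation: log_b(b**x) recovers x.
assert math.isclose(math.log(b ** x, b), x)

print(b ** math.log(x, b), math.log(b ** x, b))  # 42.0 (twice), up to rounding
```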

Using simpler operations

Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding (a short numerical sketch of this follows the table below). These are often known as logarithmic properties, which are documented in the table below. [2] The first three operations below assume that x = b^c and/or y = b^d, so that log_b(x) = c and log_b(y) = d. Derivations also use the log definitions x = b^(log_b(x)) and x = log_b(b^x).

log_b(x·y) = log_b(x) + log_b(y) because b^c · b^d = b^(c+d)
log_b(x/y) = log_b(x) - log_b(y) because b^c / b^d = b^(c-d)
log_b(x^d) = d·log_b(x) because (b^c)^d = b^(cd)
log_b(x^(1/y)) = log_b(x)/y because x^(1/y) is the y-th root of x
x^(log_b(y)) = y^(log_b(x)) because x^(log_b(y)) = b^(log_b(x)·log_b(y)) = y^(log_b(x))
c·log_b(x) + d·log_b(y) = log_b(x^c · y^d) because log_b(x^c · y^d) = log_b(x^c) + log_b(y^d)

Where b, x, and y are positive real numbers and b ≠ 1, and c and d are real numbers.
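For illustration (not part of the cited sources), the table can be spot-checked with a short Python sketch; the base b and the values of x, y, and d below are arbitrary, and the last line shows the "multiply by adding logarithms" idea mentioned above.

```python
import math

b, x, y, d = 10.0, 8.0, 5.0, 3.0  # arbitrary positive values, with b != 1


def log_b(v):
    return math.log(v, b)


# Product, quotient, power, and root rules from the table above.
assert math.isclose(log_b(x * y), log_b(x) + log_b(y))
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))
assert math.isclose(log_b(x ** d), d * log_b(x))
assert math.isclose(log_b(x ** (1 / d)), log_b(x) / d)

# Multiplication via a "log table": add the logarithms, then exponentiate the sum.
print(b ** (log_b(x) + log_b(y)))  # ~40.0, i.e. 8 * 5
```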

The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:

x·y = b^(log_b(x)) · b^(log_b(y)) = b^(log_b(x) + log_b(y)), so log_b(x·y) = log_b(b^(log_b(x) + log_b(y))) = log_b(x) + log_b(y)

The law for powers exploits another of the laws of indices:

x^y = (b^(log_b(x)))^y = b^(y·log_b(x)), so log_b(x^y) = y·log_b(x)

The law relating to quotients then follows:

log_b(x/y) = log_b(x · y^(-1)) = log_b(x) + log_b(y^(-1)) = log_b(x) - log_b(y)

Similarly, the root law is derived by rewriting the root as a reciprocal power:

log_b(x^(1/y)) = (1/y)·log_b(x)

Derivations of product, quotient, and power rules

These are the three main logarithm laws/rules/principles, [3] from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations/proofs will hinge on those facts. There are multiple ways to derive/prove each logarithm law – this is just one possible method.

Logarithm of a product

To state the logarithm of a product law formally:

log_b(x·y) = log_b(x) + log_b(y)

Derivation:

Let b be a positive real number, where b ≠ 1, and let x and y be positive real numbers. We want to relate the expressions log_b(x) and log_b(y). This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log_b(x) and log_b(y) quite often, we will give them some variable names to make working with them easier: let m = log_b(x), and let n = log_b(y).

Rewriting these as exponentials, we see that

b^m = x and b^n = y.

From here, we can relate x (i.e. b^m) and y (i.e. b^n) using exponent laws as

x·y = (b^m)(b^n) = b^(m+n).

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(x·y) = log_b(b^(m+n))

The right side may be simplified using one of the logarithm properties from before: we know that log_b(b^(m+n)) = m + n, giving

log_b(x·y) = m + n.

We now resubstitute the values for m and n into our equation, so our final expression is only in terms of x, y, and b.

log_b(x·y) = log_b(x) + log_b(y)

This completes the derivation.

Logarithm of a quotient

To state the logarithm of a quotient law formally:

log_b(x/y) = log_b(x) - log_b(y)

Derivation:

Let b be a positive real number, where b ≠ 1, and let x and y be positive real numbers.

We want to relate the expressions log_b(x) and log_b(y). This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log_b(x) and log_b(y) quite often, we will give them some variable names to make working with them easier: let m = log_b(x), and let n = log_b(y).

Rewriting these as exponentials, we see that:

b^m = x and b^n = y.

From here, we can relate x (i.e. b^m) and y (i.e. b^n) using exponent laws as

x/y = (b^m)/(b^n) = b^(m-n).

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(x/y) = log_b(b^(m-n))

The right side may be simplified using one of the logarithm properties from before: we know that log_b(b^(m-n)) = m - n, giving

log_b(x/y) = m - n.

We now resubstitute the values for m and n into our equation, so our final expression is only in terms of x, y, and b.

log_b(x/y) = log_b(x) - log_b(y)

This completes the derivation.

Logarithm of a power

To state the logarithm of a power law formally,

log_b(x^r) = r·log_b(x)

Derivation:

Let b be a positive real number, where b ≠ 1, let x be a positive real number, and let r be a real number. For this derivation, we want to simplify the expression log_b(x^r). To do this, we begin with the simpler expression log_b(x). Since we will be using log_b(x) often, we will define it as a new variable: let m = log_b(x).

To more easily manipulate the expression, we rewrite it as an exponential. By definition, m = log_b(x) means that b^m = x, so we have

b^m = x.

Similar to the derivations above, we take advantage of another exponent law. In order to have x^r in our final expression, we raise both sides of the equality to the power of r:

(b^m)^r = x^r, that is, b^(mr) = x^r,

where we used the exponent law (b^m)^r = b^(mr).

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(b^(mr)) = log_b(x^r)

The left side of the equality can be simplified using a logarithm law, which states that log_b(b^(mr)) = mr.

mr = log_b(x^r)

Substituting in the original value for m, rearranging, and simplifying gives

log_b(x^r) = r·log_b(x).

This completes the derivation.

Changing the base

To state the change of base logarithm formula formally:

log_b(x) = log_c(x) / log_c(b), for any positive real numbers x, b, and c with b ≠ 1 and c ≠ 1.

This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
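For instance, evaluating log_7(100) with only the natural and common logarithm functions might look like the following Python sketch (the base 7 and the argument 100 are arbitrary examples).

```python
import math

x, b = 100.0, 7.0  # arbitrary positive argument and base (b != 1)

via_ln = math.log(x) / math.log(b)         # change of base using natural logs
via_log10 = math.log10(x) / math.log10(b)  # change of base using common logs

assert math.isclose(via_ln, via_log10)
assert math.isclose(b ** via_ln, x)  # sanity check: 7**log_7(100) == 100
print(via_ln)  # ~2.3666
```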

Proof/derivation

Let b and c be positive real numbers, where b ≠ 1 and c ≠ 1. Here, b and c are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1.[ citation needed ] The number x will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term log_b(x) quite frequently, we define it as a new variable: let m = log_b(x).

To more easily manipulate the expression, it can be rewritten as an exponential.

b^m = x

Applying log_c to both sides of the equality,

log_c(b^m) = log_c(x)

Now, using the logarithm of a power property, which states that log_c(b^m) = m·log_c(b),

m·log_c(b) = log_c(x)

Isolating m, we get the following:

m = log_c(x) / log_c(b)

Resubstituting m = log_b(x) back into the equation,

log_b(x) = log_c(x) / log_c(b)

This completes the proof that log_b(x) = log_c(x) / log_c(b).

This formula has several consequences:

log_b(a) = 1 / log_a(b)

log_(b^n)(a) = (1/n)·log_b(a)

b^(log_a(d)) = d^(log_a(b))

-log_b(a) = log_(1/b)(a) = log_b(1/a)

log_(b_1)(a_1)·log_(b_2)(a_2)···log_(b_n)(a_n) = log_(b_π(1))(a_1)·log_(b_π(2))(a_2)···log_(b_π(n))(a_n),

where π is any permutation of the subscripts 1, ..., n. For example

log_2(x)·log_3(y)·log_5(z) = log_5(x)·log_2(y)·log_3(z).
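A numerical spot check of the reciprocal and permutation consequences (the bases and arguments below are arbitrarily chosen; this sketch is not part of the cited sources):

```python
import math

a, b = 5.0, 3.0    # arbitrary values for the reciprocal rule
x, y = 11.0, 17.0  # arbitrary positive arguments for the permutation rule

# Reciprocal consequence: log_b(a) = 1 / log_a(b).
assert math.isclose(math.log(a, b), 1 / math.log(b, a))

# Permutation consequence for n = 2: swapping the bases leaves the product unchanged.
lhs = math.log(x, 2) * math.log(y, 3)
rhs = math.log(x, 3) * math.log(y, 2)
assert math.isclose(lhs, rhs)
print(lhs, rhs)
```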

Summation/subtraction

The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:

log_b(a + c) = log_b(a) + log_b(1 + c/a) because a + c = a × (1 + c/a)
log_b(a - c) = log_b(a) + log_b(1 - c/a) because a - c = a × (1 - c/a)

Note that the subtraction identity is not defined if a = c, since the logarithm of zero is not defined. Also note that, when programming, a and c may have to be switched on the right hand side of the equations if c is much larger than a, to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates ln(1 + x) without underflow (when x is small).
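As a sketch of how this is used in practice (the log-probabilities below are made-up values, and Python is only an illustrative choice), two probabilities available only as logarithms can be added without ever forming the underlying probabilities, which would underflow:

```python
import math

# Natural-log probabilities of two events (made-up values): ln(a) and ln(c).
log_a = -745.0
log_c = -747.0

# The probabilities themselves underflow double precision (math.exp(-747.0) is 0.0),
# so compute ln(a + c) = ln(a) + ln(1 + c/a) = log_a + log1p(exp(log_c - log_a)).
log_sum = log_a + math.log1p(math.exp(log_c - log_a))
print(log_sum)  # ~ -744.873
```

Here the larger of the two log-probabilities is placed first, matching the switching rule mentioned above.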

More generally:

log_b(Σ_(i=0)^N a_i) = log_b(a_0) + log_b(1 + Σ_(i=1)^N a_i/a_0) = log_b(a_0) + log_b(1 + Σ_(i=1)^N b^(log_b(a_i) - log_b(a_0)))

Exponents

A useful identity involving exponents:

x^(log(log(x)) / log(x)) = log(x)

or more universally:

x^(log(a) / log(x)) = a

Other/resulting identities

Inequalities

Based on [4], [5] and [6]:

x/(1 + x) ≤ ln(1 + x) ≤ x, for all x > -1

All are accurate around x = 0, but not for large values of x.

Calculus identities

Limits

lim_(x→0+) log_b(x) = -∞ and lim_(x→∞) log_b(x) = +∞ for b > 1 (the limits are reversed for 0 < b < 1), while

lim_(x→0+) x^c·log_b(x) = 0 and lim_(x→∞) log_b(x)/x^c = 0 for every c > 0.

The last limit is often summarized as "logarithms grow more slowly than any power or root of x".

Derivatives of logarithmic functions

d/dx ln(x) = 1/x for x > 0
d/dx log_b(x) = 1/(x·ln(b)) for x > 0 and b ≠ 1
d/dx ln|x| = 1/x for x ≠ 0

Integral definition

ln(x) = ∫_1^x (1/t) dt for x > 0

Riemann Sum

ln(x) = lim_(n→∞) Σ_(k=1)^n Δx / x_k*

for Δx = (x - 1)/n and x_k* is a sample point in each interval.

Series representation

The natural logarithm has a well-known Taylor series [7] expansion that converges for x in the open-closed interval (-1, 1]:

ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ⋯ = Σ_(k=1)^∞ ((-1)^(k+1)/k)·x^k

Within this interval, for x = 1, the series is conditionally convergent, and for all other values, it is absolutely convergent. For x > 1 or x ≤ -1, the series does not converge to ln(1 + x). In these cases, different representations or methods must be used to evaluate the logarithm.
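A short numerical sketch of the partial sums (the choice x = 0.5 and the truncation at 20 terms are arbitrary):

```python
import math

x = 0.5        # any value in (-1, 1]
n_terms = 20   # arbitrary truncation point

# Partial sum of ln(1 + x) = x - x^2/2 + x^3/3 - ...
partial = sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

print(partial, math.log1p(x))  # both ~0.405465
```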

Harmonic number difference

It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices. [8] The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as [9]

lim_(n→∞) (H_(kn) - H_n) = ln(k),

which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals ln(k) in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here H_n denotes the n-th harmonic number, defined as

H_n = Σ_(j=1)^n 1/j

The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals. [10] [8] [11] As n tends towards infinity, the difference between the harmonic numbers H_(kn) and H_n converges to a non-zero value. This persistent non-zero difference, ln(k), precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence. [12] [13] The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering:

H_(kn) - H_n ≈ ln(kn) - ln(n) = ln(k)
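A quick numerical check of the limit (the scaling factor k = 4 and the index n = 10^6 are arbitrary choices):

```python
import math

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / j for j in range(1, n + 1))

k, n = 4, 10 ** 6  # arbitrary scaling factor and (large) index

difference = harmonic(k * n) - harmonic(n)
print(difference, math.log(k))  # both ~1.38629
```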

Harmonic limit derivation

The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from n + 1 to kn:

H_(kn) - H_n = Σ_(j=n+1)^(kn) 1/j

This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of 1/x from n to kn:

∫_n^(kn) (1/x) dx = ln(kn) - ln(n) = ln(k)

As the window's lower bound begins at n + 1 and the upper bound extends to kn, both of which tend toward infinity as n → ∞, the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity, which mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from n to kn, where the onset at n + 1 implies this minimally discrete region.

Double series formula

The harmonic number difference formula for ln(m) is an extension [9] of the classic, alternating identity of ln(2):

ln(2) = Σ_(k=1)^∞ (1/(2k - 1) - 1/(2k)),

which can be generalized as the double series over the residues of m:

ln(m) = Σ_(k ∈ (m)) Σ_(r=1)^(m-1) (1/(k + r) - 1/(k + m)),

where (m) is the principal ideal generated by m (so the outer index k runs over 0, m, 2m, ...). Subtracting 1/(k + m) from each term (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as m increases. For example:

ln(4) = Σ_(k ∈ (4)) ((1/(k + 1) - 1/(k + 4)) + (1/(k + 2) - 1/(k + 4)) + (1/(k + 3) - 1/(k + 4)))

This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues r ensures that adjustments are uniformly applied across all possible offsets within each block of m terms. This uniform distribution of the "correction" across different intervals defined by m functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series.
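The double series, as stated above, can be checked numerically by truncating the outer sum (the modulus m = 4 and the number of blocks kept are arbitrary choices):

```python
import math

m = 4              # modulus (arbitrary)
blocks = 100_000   # number of outer terms kept (arbitrary truncation)

total = 0.0
for i in range(blocks):     # k runs over the principal ideal (m): 0, m, 2m, ...
    k = m * i
    for r in range(1, m):   # residues r = 1, ..., m - 1
        total += 1.0 / (k + r) - 1.0 / (k + m)

print(total, math.log(m))  # both ~1.3863
```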

Deveci's Proof

A fundamental feature of the proof is the accumulation of the subtrahends into a unit fraction, that is, for , thus rather than , where the extrema of are if and otherwise, with the minimum of being implicit in the latter case due to the structural requirements of the proof. Since the cardinality of depends on the selection of one of two possible minima, the integral , as a set-theoretic procedure, is a function of the maximum (which remains consistent across both interpretations) plus , not the cardinality (which is ambiguous [14] [15] due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window—a shifting -tuple—over the harmonic series, advancing the window by positions to select the next -tuple, and offsetting each element of each tuple by relative to the window's absolute position. The sum corresponds to which scales without bound. The sum corresponds to the prefix trimmed from the series to establish the window's moving lower bound , and is the limit of the sliding window (the scaled, truncated [16] series):

Integrals of logarithmic functions

For example, ∫ ln(x) dx = x·ln(x) - x + C. To remember higher integrals, it is convenient to define

x^[n] = x^n·(ln(x) - H_n),

where H_n is the nth harmonic number:

H_n = 1 + 1/2 + 1/3 + ⋯ + 1/n

Then

(d/dx) x^[n] = n·x^[n-1]

∫ x^[n] dx = x^[n+1]/(n + 1) + C

Approximating large numbers

The identities of logarithms can be used to approximate large numbers. Note that logb(a) + logb(c) = logb(ac), where a, b, and c are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, 2^32,582,657 - 1. To get the base-10 logarithm, we would multiply 32,582,657 by log10(2), getting 9,808,357.09543 = 9,808,357 + 0.09543. We can then get 2^32,582,657 ≈ 10^9,808,357 × 10^0.09543 ≈ 1.25 × 10^9,808,357.

Similarly, factorials can be approximated by summing the logarithms of the terms.
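Both computations can be reproduced directly; the sketch below uses Python, and the use of math.lgamma for the factorial cross-check is an illustrative choice rather than part of the original text.

```python
import math

# 44th Mersenne prime, 2**32582657 - 1 (subtracting 1 does not change the leading digits).
exponent = 32_582_657
log10_value = exponent * math.log10(2)       # ~9808357.09543
int_part, frac_part = divmod(log10_value, 1)
print(10 ** frac_part, int(int_part))        # ~1.246 and 9808357, i.e. about 1.25 x 10**9808357

# Factorials: log10(n!) is the sum of log10 of the terms (cross-checked with lgamma).
n = 1000
log10_factorial = sum(math.log10(i) for i in range(1, n + 1))
print(log10_factorial, math.lgamma(n + 1) / math.log(10))  # both ~2567.6046
```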

Complex logarithm identities

The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut.
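For illustration (Python's cmath module is used here as an arbitrary tool, not something referenced by the article), the principal value and its branch-cut discontinuity can be seen numerically, along with a failure of the naive sum rule for the single-valued version:

```python
import cmath

# Principal value: Log(-1) = i*pi; just below the negative real axis the
# imaginary part jumps to about -i*pi, showing the branch discontinuity.
print(cmath.log(-1 + 0j))      # ~3.14159j
print(cmath.log(-1 - 1e-12j))  # ~-3.14159j

# The principal value does not satisfy Log(z1*z2) = Log(z1) + Log(z2) in general;
# for this choice of z1 and z2 the two sides differ by 2*pi*i.
z1 = z2 = -1 + 1j
print(cmath.log(z1 * z2), cmath.log(z1) + cmath.log(z2))
```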

Definitions

In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.

The multiple valued version of log(z) is a set, but it is easier to write it without braces and using it in formulas follows obvious rules.

When k is any integer:

log(z) = ln(|z|) + i·arg(z) = Log(z) + 2πik

Log(z) = ln(|z|) + i·Arg(z), where -π < Arg(z) ≤ π

Constants

Principal value forms:

Multiple value forms, for any k an integer:

Summation

Principal value forms:

[17]
[17]

Multiple value forms:

Powers

A complex power of a complex number can have many possible values.
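A sketch of this multivaluedness (the classic example i^i is an arbitrary choice; each integer k below selects a different branch of log(z)):

```python
import cmath

z, w = 1j, 1j  # compute values of i**i

for k in range(-1, 2):
    log_z = cmath.log(z) + 2j * cmath.pi * k  # one branch of the multivalued log
    value = cmath.exp(w * log_z)              # z**w = exp(w * log(z)) on that branch
    print(k, value)

# The principal branch (k = 0) agrees with Python's built-in power operator.
print(1j ** 1j)  # (0.2079...+0j), a real value
```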

Principal value form:

Multiple value forms:

Where k1, k2 are any integers:


References

  1. Weisstein, Eric W. "Logarithm". mathworld.wolfram.com. Retrieved 2020-08-29.
  2. "4.3 - Properties of Logarithms". people.richland.edu. Retrieved 2020-08-29.
  3. "Properties and Laws of Logarithms". courseware.cemc.uwaterloo.ca/8. Retrieved 2022-04-23.
  4. "Archived copy" (PDF). Archived from the original (PDF) on 2016-10-20. Retrieved 2016-12-20.{{cite web}}: CS1 maint: archived copy as title (link)
  5. http://www.lkozma.net/inequalities_cheat_sheet/ineq.pdf [ bare URL PDF ]
  6. http://downloads.hindawi.com/archive/2013/412958.pdf [ bare URL PDF ]
  7. Weisstein, Eric W. "Mercator Series". MathWorld--A Wolfram Web Resource. Retrieved 2024-04-24.
  8. Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics. Cambridge University Press. p. 389. ISBN 978-0521898065. See page 117, and VI.8 definition of shifted harmonic numbers on page 389.
  9. Deveci, Sinan (2022). "On a Double Series Representation of the Natural Logarithm, the Asymptotic Behavior of Hölder Means, and an Elementary Estimate for the Prime Counting Function". arXiv:2211.10751 [math.NT]. See Theorem 5.2. on pages 22–23.
  10. Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley. p. 429. ISBN   0-201-55802-5.
  11. "Harmonic Number". Wolfram MathWorld. Retrieved 2024-04-24. See formula 13.
  12. Kifowit, Steven J. (2019). More Proofs of Divergence of the Harmonic Series (PDF) (Report). Prairie State College. Retrieved 2024-04-24. See Proofs 23 and 24 for details on the relationship between harmonic numbers and logarithmic functions.
  13. Bell, Jordan; Blåsjö, Viktor (2018). "Pietro Mengoli's 1650 Proof That the Harmonic Series Diverges". Mathematics Magazine. 91 (5): 341–347. doi:10.1080/0025570X.2018.1506656. hdl: 1874/407528 . JSTOR   48665556 . Retrieved 2024-04-24.
  14. Harremoës, Peter (2011). "Is Zero a Natural Number?". arXiv: 1102.0418 [math.HO]. A synopsis on the nature of 0 which frames the choice of minimum as the dichotomy between ordinals and cardinals.
  15. Barton, N. (2020). "Absence perception and the philosophy of zero". Synthese. 197 (9): 3823–3850. doi:10.1007/s11229-019-02220-x. PMC   7437648 . PMID   32848285. See section 3.1
  16. The shift is characteristic of the right Riemann sum employed to prevent the integral from degenerating into the harmonic series, thereby averting divergence. Here, functions analogously, serving to regulate the series. The successor operation signals the implicit inclusion of the modulus (the region omitted from ). The importance of this, from an axiomatic perspective, becomes evident when the residues of are formulated as , where is bootstrapped by to produce the residues of modulus . Consequently, represents a limiting value in this context.
  17. Abramowitz, Milton (1965). Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables. Irene A. Stegun. New York: Dover Publications. ISBN 0-486-61272-4. OCLC 429082.