
In mathematics and computer science, **Horner's method** (or **Horner's scheme**) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians.^{ [1] } After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.


The algorithm is based on **Horner's rule**, in which the polynomial is rewritten in nested form:

a_{0} + a_{1}x + a_{2}x^{2} + a_{3}x^{3} + ⋯ + a_{n}x^{n} = a_{0} + x(a_{1} + x(a_{2} + x(a_{3} + ⋯ + x(a_{n−1} + x·a_{n})⋯))).

This allows the evaluation of a polynomial of degree n with only n multiplications and n additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.^{[2]}

Alternatively, **Horner's method** also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by the application of Horner's Rule. It was widely used until computers came into general use around 1970.

Given the polynomial

p(x) = ∑_{i=0}^{n} a_{i}x^{i} = a_{0} + a_{1}x + a_{2}x^{2} + ⋯ + a_{n}x^{n},

where a_{0}, …, a_{n} are constant coefficients, the problem is to evaluate the polynomial at a specific value x_{0} of x.

For this, a new sequence of constants is defined recursively as follows:

b_{n} = a_{n}
b_{n−1} = a_{n−1} + b_{n}x_{0}
⋮
b_{1} = a_{1} + b_{2}x_{0}
b_{0} = a_{0} + b_{1}x_{0}.

Then b_{0} is the value of p(x_{0}).

To see why this works, the polynomial can be written in the form

p(x) = a_{0} + x(a_{1} + x(a_{2} + ⋯ + x(a_{n−1} + a_{n}x)⋯)).

Thus, by iteratively substituting the b_{i} into the expression,

p(x_{0}) = a_{0} + x_{0}(a_{1} + x_{0}(a_{2} + ⋯ + x_{0}(a_{n−1} + b_{n}x_{0})⋯))
         = a_{0} + x_{0}(a_{1} + x_{0}(a_{2} + ⋯ + x_{0}b_{n−1}⋯))
         ⋮
         = a_{0} + x_{0}b_{1}
         = b_{0}.

Now, it can be proven that

p(x) = (b_{1} + b_{2}x + b_{3}x^{2} + ⋯ + b_{n}x^{n−1})(x − x_{0}) + b_{0}.

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of

p(x) / (x − x_{0}),

with b_{0} (which is equal to p(x_{0})) being the division's remainder, as is demonstrated by the examples below. If x_{0} is a root of p(x), then b_{0} = 0 (meaning the remainder is 0), which means you can factor p(x) as (x − x_{0}) times the quotient.

As for finding the consecutive b-values, you start by determining b_{n}, which is simply equal to a_{n}. You then work your way down to the other b's, using the formula

b_{i} = a_{i} + b_{i+1}x_{0},

till you arrive at b_{0}.
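The recurrence above translates directly into a short loop. As an illustrative Python sketch (not part of the original article; the function name is ours), with coefficients supplied from a_{n} down to a_{0}:

```python
def horner(coeffs, x0):
    """Evaluate a polynomial at x0 by Horner's rule.

    coeffs holds a_n, a_{n-1}, ..., a_0 (highest degree first).
    Each step computes b_i = a_i + b_{i+1} * x0; the final value is b_0 = p(x0).
    """
    result = 0
    for a in coeffs:
        result = result * x0 + a
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # → 5
```

Note that the loop performs exactly n multiplications and n additions for a degree-n polynomial, matching the operation count claimed above.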

Evaluate f(x) = 2x^{3} − 6x^{2} + 2x − 1 for x = 3.

We use synthetic division as follows:

 x_{0} │ x^{3}  x^{2}  x^{1}  x^{0}
   3   │   2     −6      2     −1
       │          6      0      6
       └────────────────────────────
           2      0      2      5

The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the *x*-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f(x) on division by x − 3 is 5.

But by the polynomial remainder theorem, we know that the remainder is f(3). Thus f(3) = 5.

In this example, with a_{3} = 2, a_{2} = −6, a_{1} = 2, a_{0} = −1, we can see that b_{3} = 2, b_{2} = 0, b_{1} = 2, b_{0} = 5 — the entries in the third row. So, synthetic division is based on Horner's method.

As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial 2x^{2} + 0x + 2, the quotient of f(x) on division by x − 3. The remainder is 5. This makes Horner's method useful for polynomial long division.

Divide x^{3} − 6x^{2} + 11x − 6 by x − 2:

 2 │   1    −6    11    −6
   │         2    −8     6
   └──────────────────────────
       1    −4     3     0

The quotient is x^{2} − 4x + 3, and the remainder is 0.
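The tableau manipulation above is a single Horner pass that yields both the quotient coefficients and the remainder. A minimal Python sketch (illustrative; the function name is ours):

```python
def synthetic_division(coeffs, x0):
    """Divide p(x) by (x - x0) using Horner's scheme.

    coeffs holds a_n, ..., a_0 (highest degree first).
    Returns (quotient_coeffs, remainder), where the quotient
    coefficients are b_n, ..., b_1 and the remainder is b_0 = p(x0).
    """
    b = [coeffs[0]]                  # b_n = a_n
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)     # b_i = a_i + b_{i+1} * x0
    return b[:-1], b[-1]

# (x^3 - 6x^2 + 11x - 6) / (x - 2)
q, r = synthetic_division([1, -6, 11, -6], 2)
print(q, r)  # → [1, -4, 3] 0
```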

Let f_{1}(x) = 4x^{4} − 6x^{3} + 3x − 5 and f_{2}(x) = 2x − 1. Divide f_{1}(x) by f_{2}(x) using Horner's method.

 0.5 │   4    −6     0     3    −5
     │         2    −2    −1     1
     └────────────────────────────────
         2    −2    −1     1    −2

The third row is the sum of the first two rows, divided by 2 (the leading coefficient of f_{2}). Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is

f_{1}(x) / f_{2}(x) = 2x^{3} − 2x^{2} − x + 1 − 2/(x − 0.5).

Evaluation using the monomial form of a degree-*n* polynomial requires at most *n* additions and (*n*^{2} + *n*)/2 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. (This can be reduced to *n* additions and 2*n* − 1 multiplications by evaluating the powers of *x* iteratively.) If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2*n* times the number of bits of *x*: the evaluated polynomial has approximate magnitude *x*^{n}, and one must also store *x*^{n} itself. By contrast, Horner's method requires only *n* additions and *n* multiplications, and its storage requirements are only *n* times the number of bits of *x*.

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal.^{ [4] } Victor Pan proved in 1966 that the number of multiplications is minimal.^{ [5] } However, when *x* is a matrix, Horner's method is not optimal.

This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-*n* polynomial can be evaluated using only ⌊*n*/2⌋+2 multiplications and *n* additions.^{ [6] }

A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.

If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

p(x) = (a_{0} + a_{2}x^{2} + a_{4}x^{4} + ⋯) + x(a_{1} + a_{3}x^{2} + a_{5}x^{4} + ⋯)
     = p_{0}(x^{2}) + x·p_{1}(x^{2}).

More generally, the summation can be broken into *k* parts:

p(x) = ∑_{j=0}^{k−1} x^{j} ∑_{i=0}^{⌊n/k⌋} a_{ki+j} x^{ki},

where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows *k*-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math.
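The two-way (even/odd) split can be sketched as follows in Python (illustrative; names are ours). The two inner evaluations are independent of each other, so on real hardware they could proceed in parallel, e.g. in separate SIMD lanes:

```python
def _eval(coeffs, x):
    """Plain Horner evaluation; coeffs are highest degree first."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

def horner_even_odd(coeffs, x):
    """Evaluate p(x) as p0(x^2) + x * p1(x^2).

    coeffs holds a_n, ..., a_0.  p0 collects the even-exponent
    coefficients and p1 the odd-exponent ones; each half-size
    polynomial is evaluated at x^2 by an independent Horner pass.
    """
    rev = coeffs[::-1]            # a_0, a_1, ..., a_n
    even = rev[0::2][::-1]        # coefficients of p0, highest first
    odd = rev[1::2][::-1]         # coefficients of p1, highest first
    x2 = x * x
    return _eval(even, x2) + x * _eval(odd, x2)

# Same value as direct Horner evaluation of 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner_even_odd([2, -6, 2, -1], 3))  # → 5
```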

Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) the coefficients a_{i} are the bits of the number and x = 2. Then, *x* (or *x* to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2, so powers of 2 are repeatedly factored out.

For example, to find the product of two numbers (0.15625) and *m*:

(0.15625)m = (0.00101_{2})m = (2^{−3} + 2^{−5})m = 2^{−3}(m + 2^{−2}m) = 2^{−3}(m + 2^{−2}(m + 0)).

To find the product of two binary numbers *d* and *m*:

1. A register holding the intermediate result is initialized to *d*.
2. Begin with the least significant (rightmost) non-zero bit in *m*.
   - 2b. Count (to the left) the number of bit positions to the next most significant non-zero bit. If there are no more-significant bits, then take the value of the current bit position.
   - 2c. Using that value, perform a left-shift operation by that number of bits on the register holding the intermediate result.
3. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add *d* to the intermediate result, and continue in step 2 with the next most significant bit in *m*.

In general, for a binary number with bit values (d_{3}d_{2}d_{1}d_{0}) the product is

(d_{3}2^{3} + d_{2}2^{2} + d_{1}2^{1} + d_{0}2^{0})m = d_{3}2^{3}m + d_{2}2^{2}m + d_{1}2^{1}m + d_{0}2^{0}m.

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite the implication in the factored equation:

d_{3}2^{3}(m + (d_{2}/d_{3})2^{−1}(m + (d_{1}/d_{2})2^{−1}(m + (d_{0}/d_{1})2^{−1}m))).

The denominators all equal one (or the term is absent), so this reduces to

2^{3}(m + 2^{−1}(m + 2^{−1}(m + 2^{−1}m))),

or equivalently (as consistent with the "method" described above)

2^{3}m + 2^{2}m + 2^{1}m + 2^{0}m.

In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor (2^{−1}) is a right arithmetic shift, a (0) results in no operation (since 2^{0} = 1 is the multiplicative identity element), and a (2^{1}) results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.

The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.^{[7]}
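The idea can be sketched in a few lines of Python (illustrative, and simplified relative to the routine described above: this version scans the multiplier from its most significant bit downward, whereas the steps above work up from the least significant non-zero bit; both orderings realize the same nested base-2 factoring):

```python
def multiply_shift_add(d, m):
    """Multiply d by non-negative integer m using only shifts and adds.

    This is Horner's rule over base 2: for each bit of m, from most
    significant to least, double the accumulator (left shift) and add
    d when the bit is set.
    """
    acc = 0
    for bit in bin(m)[2:]:           # bits of m, most significant first
        acc = (acc << 1) + (d if bit == '1' else 0)
    return acc

print(multiply_shift_add(13, 11))  # → 143
```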

Horner's method can be used to convert between different positional numeral systems – in which case *x* is the base of the number system, and the *a*_{i} coefficients are the digits of the base-*x* representation of a given number – and can also be used if *x* is a matrix, in which case the gain in computational efficiency is even greater. In fact, when *x* is a matrix, further acceleration is possible which exploits the structure of matrix multiplication, and only O(√*n*) matrix multiplications instead of *n* are needed (at the expense of requiring more storage) using the 1973 method of Paterson and Stockmeyer.^{[8]}
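The numeral-system conversion is the same recurrence with *x* taken to be the base and the coefficients taken to be the digits. A minimal Python sketch (illustrative; the function name is ours):

```python
def digits_to_int(digits, base):
    """Interpret a digit sequence (most significant first) in the given base.

    Horner's rule: value = (...((d_0 * base + d_1) * base + d_2)...).
    """
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(digits_to_int([1, 0, 1, 1], 2))   # → 11
print(digits_to_int([2, 5, 5], 10))     # → 255
```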

Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_{n}(x) of degree n with zeros z_{n} < z_{n−1} < ⋯ < z_{1}, make some initial guess x_{0} such that x_{0} > z_{1}. Now iterate the following two steps:

- Using Newton's method, find the largest zero z_{1} of p_{n}(x) using the guess x_{0}.
- Using Horner's method, divide out (x − z_{1}) to obtain p_{n−1}(x). Return to step 1, but use the polynomial p_{n−1}(x) and the initial guess z_{1}.

These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.^{ [9] }
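The two-step loop can be sketched in Python (an illustrative toy under the stated assumptions — all roots real and the initial guess above the largest root — not a robust root finder; the function names are ours):

```python
def horner_division(coeffs, x0):
    """One Horner pass: returns (quotient coefficients, p(x0))."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)
    return b[:-1], b[-1]

def derivative(coeffs):
    """Coefficients of p'(x); coeffs are highest degree first."""
    n = len(coeffs) - 1
    return [a * (n - i) for i, a in enumerate(coeffs[:-1])]

def newton(coeffs, x, steps=100):
    """Newton iteration x -= p(x)/p'(x), evaluating both via Horner."""
    dcoeffs = derivative(coeffs)
    for _ in range(steps):
        _, p = horner_division(coeffs, x)
        _, dp = horner_division(dcoeffs, x)
        if dp == 0:
            break
        x -= p / dp
    return x

def real_roots(coeffs, guess):
    """Find all roots: Newton for the largest zero, then deflate and repeat."""
    roots = []
    while len(coeffs) > 1:
        r = newton(coeffs, guess)
        roots.append(r)
        coeffs, _ = horner_division(coeffs, r)   # divide out (x - r)
        guess = r                                # reuse the zero as next guess
    return roots

# p(x) = (x - 2)(x - 3)(x - 7) = x^3 - 12x^2 + 41x - 42, guess above 7
print(real_roots([1.0, -12.0, 41.0, -42.0], 8.0))
```

Note how the deflation step reuses the same Horner pass that performs evaluation: the quotient coefficients come for free once the zero is divided out.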

Consider the polynomial

p_{6}(x) = (x + 8)(x + 5)(x + 3)(x − 2)(x − 3)(x − 7),

which can be expanded to

p_{6}(x) = x^{6} + 4x^{5} − 72x^{4} − 214x^{3} + 1127x^{2} + 1602x − 5040.

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method, the first zero of 7 is found as shown in black in the figure to the right. Next, p_{6}(x) is divided by (x − 7) to obtain

p_{5}(x) = x^{5} + 11x^{4} + 5x^{3} − 179x^{2} − 126x + 720,

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree-5 polynomial is now divided by (x − 3) to obtain

p_{4}(x) = x^{4} + 14x^{3} + 47x^{2} − 38x − 240,

which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain

p_{3}(x) = x^{3} + 16x^{2} + 79x + 120,

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

p_{2}(x) = x^{2} + 13x + 40,

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_{2}(x) once more and solving the resulting linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.

Horner's method can be modified to compute the divided difference (p(y) − p(x))/(y − x). Given the polynomial (as before)

p(x) = a_{0} + a_{1}x + a_{2}x^{2} + ⋯ + a_{n}x^{n},

proceed as follows:^{[10]}

b_{n} = a_{n},                      d_{n} = b_{n},
b_{i} = a_{i} + b_{i+1}x  (for i = n−1, …, 0),
d_{i} = b_{i} + d_{i+1}y  (for i = n−1, …, 1).

At completion, we have

p(x) = b_{0},
(p(y) − p(x))/(y − x) = d_{1},
p(y) = b_{0} + (y − x)d_{1}.

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y. Substituting y = x in this method gives d_{1} = p′(x), the derivative of p(x).
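The y = x special case, which yields the value and the derivative in a single pass, can be sketched in Python (illustrative; the function name is ours):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) together.

    coeffs holds a_n, ..., a_0.  p carries the b_i of the ordinary
    Horner recurrence; dp carries the d_i with y = x, so at the end
    dp = d_1 = p'(x).
    """
    p = coeffs[0]      # b_n = a_n
    dp = 0             # d-sequence accumulator
    for a in coeffs[1:]:
        dp = dp * x + p    # d_i = b_i + d_{i+1} * x (y = x case)
        p = p * x + a      # b_i = a_i + b_{i+1} * x
    return p, dp

# p(x) = x^3 - 6x^2 + 11x - 6: p(2) = 0, p'(2) = -1
print(horner_with_derivative([1, -6, 11, -6], 2))  # → (0, -1)
```

This pairing is exactly what a Newton iteration needs per step, which is why the two methods combine so naturally in the root-finding application above.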

Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation",^{[12]} was read before the Royal Society of London at its meeting on July 1, 1819, with a sequel in 1823.^{[12]} Horner's paper in Part II of *Philosophical Transactions of the Royal Society of London* for 1819 was warmly and expansively welcomed by a reviewer in the issue of *The Monthly Review: or, Literary Journal* for April 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in *The Monthly Review* for September 1821 concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller^{[13]} showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method", and that in consequence the priority for this method should go to Holdred (1820).

Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonnycastle's book on algebra, though he neglected the work of Paolo Ruffini.

Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:

- Paolo Ruffini in 1809 (see Ruffini's rule)^{[14]}^{[15]}
- Isaac Newton in 1669^{[16]}^{[17]}
- the Chinese mathematician Zhu Shijie in the 14th century^{[15]}
- the Chinese mathematician Qin Jiushao in his *Mathematical Treatise in Nine Sections* in the 13th century
- the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use the method in the general case of the cubic equation)^{[18]}
- the Chinese mathematician Jia Xian in the 11th century (Song dynasty)
- *The Nine Chapters on the Mathematical Art*, a Chinese work of the Han dynasty (202 BC – 220 AD), edited by Liu Hui (fl. 3rd century)^{[19]}

Qin Jiushao, in his *Shu Shu Jiu Zhang* (*Mathematical Treatise in Nine Sections*; 1247), presents a portfolio of Horner-type methods for solving polynomial equations, based on earlier works of the 11th-century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then-Chinese custom of case studies. Yoshio Mikami, in *Development of Mathematics in China and Japan* (Leipzig 1913), wrote:

"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."

^{ [20] }

Ulrich Libbrecht concluded: *It is obvious that this procedure is a Chinese invention ... the method was not known in India*. He suggested that Fibonacci probably learned of it from Arabs, who perhaps borrowed it from the Chinese.^{[21]} The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in *Jiu Zhang Suan Shu*, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book *Jigu Suanjing*.

- Clenshaw algorithm to evaluate polynomials in Chebyshev form
- De Boor's algorithm to evaluate splines in B-spline form
- De Casteljau's algorithm to evaluate polynomials in Bézier form
- Estrin's scheme to facilitate parallelization on modern computer architectures
- Lill's method to approximate roots graphically
- Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r

1. 600 years earlier by the Chinese mathematician Qin Jiushao, and 700 years earlier by the Persian mathematician Sharaf al-Dīn al-Ṭūsī.
2. Pan 1966.
3. Pankiewicz 1968.
4. Ostrowski 1954.
5. Pan 1966.
6. Knuth 1997.
7. Kripasagar 2008, p. 62.
8. Higham 2002, Section 5.4.
9. Kress 1991, p. 112.
10. Fateman & Kahan 2000.
11. Libbrecht 2005, pp. 181–191.
12. Horner 1819.
13. Fuller 1999, pp. 29–51.
14. Cajori 1911.
15. O'Connor, John J.; Robertson, Edmund F., "Horner's method", *MacTutor History of Mathematics archive*, University of St Andrews.
16. *Analysis Per Quantitatum Series, Fluxiones ac Differentias: Cum Enumeratione Linearum Tertii Ordinis*, Londini. Ex Officina Pearsoniana. Anno MDCCXI, p. 10, 4th paragraph.
17. Newton's collected papers, the 1779 edition, in a footnote, vol. I, pp. 270–271.
18. Berggren 1990, pp. 304–309.
19. Temple 1986, p. 142.
20. Mikami 1913, p. 77.
21. Libbrecht 2005, p. 208.


- Berggren, J. L. (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat". *Journal of the American Oriental Society*. **110** (2): 304–309. doi:10.2307/604533. JSTOR 604533.
- Cajori, Florian (1911). "Horner's method of approximation anticipated by Ruffini". *Bulletin of the American Mathematical Society*. **17** (8): 409–414. doi:10.1090/s0002-9904-1911-02072-9. Read before the Southwestern Section of the American Mathematical Society on November 26, 1910.
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). *Introduction to Algorithms* (3rd ed.). MIT Press.
- Fateman, R. J.; Kahan, W. (2000). *Improving exact integrals from symbolic algebra systems* (Report). PAM. University of California, Berkeley: Center for Pure and Applied Mathematics.
- Fuller, A. T. (1999). "Horner versus Holdred: An Episode in the History of Root Computation". *Historia Mathematica*. **26**: 29–51. doi:10.1006/hmat.1998.2214.
- Higham, Nicholas (2002). *Accuracy and Stability of Numerical Algorithms*. SIAM. ISBN 978-0-89871-521-7.
- Holdred, T. (1820). *A New Method of Solving Equations with Ease and Expedition; by which the True Value of the Unknown Quantity is Found Without Previous Reduction. With a Supplement, Containing Two Other Methods of Solving Equations, Derived from the Same Principle*. Richard Watts. Holdred's method is in the supplement following the page numbered 45.
- Horner, William George (July 1819). "A new method of solving numerical equations of all orders, by continuous approximation". *Philosophical Transactions*. Royal Society of London. **109**: 308–335. doi:10.1098/rstl.1819.0023. JSTOR 107508. Also reprinted with appraisal in D. E. Smith, *A Source Book in Mathematics*, McGraw-Hill, 1929; Dover reprint, 2 vols, 1959.
- Knuth, Donald (1997). *The Art of Computer Programming*. Vol. 2: *Seminumerical Algorithms* (3rd ed.). Addison-Wesley. pp. 486–488 (Section 4.6.4). ISBN 978-0-201-89684-8.
- Kress, Rainer (1991). *Numerical Analysis*. Springer.
- Kripasagar, Venkat (March 2008). "Efficient Micro Mathematics – Multiplication and Division Techniques for MCUs". *Circuit Cellar Magazine* (212).
- Libbrecht, Ulrich (2005). "Chapter 13". *Chinese Mathematics in the Thirteenth Century* (2nd ed.). Dover. ISBN 978-0-486-44619-6.
- Mikami, Yoshio (1913). "Chapter 11. Ch'in Chiu-Shao". *The Development of Mathematics in China and Japan* (1st ed.). Chelsea Publishing Co reprint. pp. 74–77.
- Ostrowski, Alexander M. (1954). "On two problems in abstract algebra connected with Horner's rule". *Studies in Mathematics and Mechanics Presented to Richard von Mises*. Academic Press. pp. 40–48. ISBN 978-1-4832-3272-0.
- Pan, Y. Ja (1966). "On means of calculating values of polynomials". *Russian Math. Surveys*. **21**: 105–136. doi:10.1070/rm1966v021n01abeh004147.
- Pankiewicz, W. (1968). "Algorithm 337: calculation of a polynomial and its derivative values by Horner scheme". *Communications of the ACM*. **11** (9): 633. doi:10.1145/364063.364089.
- Spiegel, Murray R. (1956). *Schaum's Outline of Theory and Problems of College Algebra*. McGraw-Hill.
- Temple, Robert (1986). *The Genius of China: 3,000 Years of Science, Discovery, and Invention*. Simon and Schuster. ISBN 978-0-671-62028-8.
- Whittaker, E. T.; Robinson, G. (1924). *The Calculus of Observations*. London: Blackie.
- Wylie, Alexander (1897). "Jottings on the Science of Chinese Arithmetic". *Chinese Researches*. Shanghai. pp. 159–194. Reprinted from issues of *The North China Herald* (1852).


- "Horner scheme",
*Encyclopedia of Mathematics*, EMS Press, 2001 [1994] - Qiu Jin-Shao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.)
- For more on the root-finding application see
