Author | William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery |
---|---|
Language | English |
Discipline | Numerical analysis |
Publisher | Cambridge University Press |
Website | numerical.recipes |
Numerical Recipes is the generic title of a series of books on algorithms and numerical analysis by William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery. In various editions, the books have been in print since 1986. The most recent edition was published in 2007.
The Numerical Recipes books cover a range of topics that include both classical numerical analysis (interpolation, integration, linear algebra, differential equations, and so on), signal processing (Fourier methods, filtering), statistical treatment of data, and a few topics in machine learning (hidden Markov model, support vector machines). The writing style is accessible and has an informal tone. The emphasis is on understanding the underlying basics of techniques, not on the refinements that may, in practice, be needed to achieve optimal performance and reliability. Few results are proved with any degree of rigor, although the ideas behind proofs are often sketched, and references are given. Importantly, virtually all methods that are discussed are also implemented in a programming language, with the code printed in the book. Each version is keyed to a specific language.
According to the publisher, Cambridge University Press, the Numerical Recipes books are historically the all-time best-selling books on scientific programming methods. In recent years, Numerical Recipes books have been cited in the scientific literature more than 3000 times per year according to ISI Web of Knowledge (for example, 3962 times in 2008). [1] As of the end of 2017, the book had over 44,000 citations on Google Scholar. [2]
The first publication was in 1986, under the title "Numerical Recipes: The Art of Scientific Computing", containing code in both Fortran and Pascal; an accompanying book, "Numerical Recipes Example Book (Pascal)", was first published in 1985. (A preface note in the example book mentions that the main book was also published in 1985, but the official note in that book says 1986.) Supplemental editions followed with code in Pascal, BASIC, and C. Numerical Recipes took, from the start, an opinionated editorial position at odds with the conventional wisdom of the numerical analysis community:
If there is a single dominant theme in this book, it is that practical methods of numerical computation can be simultaneously efficient, clever, and — important — clear. The alternative viewpoint, that efficient computational methods must necessarily be so arcane and complex as to be useful only in "black box" form, we firmly reject. [3]
However, as it turned out, the 1980s were fertile years for the "black box" side, yielding important libraries such as BLAS and LAPACK, and integrated environments like MATLAB and Mathematica. By the early 1990s, when Second Edition versions of Numerical Recipes (with code in C, Fortran-77, and Fortran-90) were published, it was clear that the constituency for Numerical Recipes was by no means the majority of scientists doing computation, but only that slice that lived between the more mathematical numerical analysts and the larger community using integrated environments. The Second Edition versions occupied a stable role in this niche environment. [4]
By the mid-2000s, the practice of scientific computing had been radically altered by the mature Internet and Web. Recognizing that their Numerical Recipes books were increasingly valued more for their explanatory text than for their code examples, the authors significantly expanded the scope of the book, and significantly rewrote a large part of the text. They continued to include code, still printed in the book, now in C++, for every method discussed. [5] The Third Edition was also released as an electronic book, [6] eventually made available on the Web for free (with nags) or by paid or institutional subscription (with faster, full access and no nags).
In 2015, Numerical Recipes sold its historic two-letter domain name nr.com [7] and moved to numerical.recipes instead.
Numerical Recipes is a single volume that covers a very broad range of algorithms. Unfortunately, that format skewed the choice of algorithms toward simpler and shorter early algorithms, which were not as accurate, efficient, or stable as later, more complex algorithms. [8] [9] The first edition also had some minor bugs, which were fixed in later editions; however, according to the authors, for years they encountered rumors on the internet that Numerical Recipes is "full of bugs". They attributed this to people using outdated versions of the code, bugs in other parts of the code, and misuse of routines that require some understanding to use correctly. [10]
The rebuttal does not, however, address criticism regarding the lack of discussion of code limitations, boundary conditions, and more modern algorithms, another theme in Snyder's comment compilation. [9] According to Pavel Holoborodko, a precision issue in the Bessel functions has persisted into the third edition. [8]
Despite criticism by numerical analysts, engineers and scientists generally find the book conveniently broad in scope. [9] Norman Gray concurs in the following quote: [11]
Numerical Recipes [nr] does not claim to be a numerical analysis textbook, and it makes a point of noting that its authors are (astro-)physicists and engineers rather than analysts, and so share the motivations and impatience of the book's intended audience. The declared premise of the NR authors is that you will come to grief one way or the other if you use numerical routines you do not understand. They attempt to give you enough mathematical detail that you understand the routines they present, in enough depth that you can diagnose problems when they occur, and make more sophisticated choices about replacements when the NR routines run out of steam. Problems will occur because [...]
The code listings are copyrighted and commercially licensed by the Numerical Recipes authors. [12] A license to use the code is given with the purchase of a book, but the terms of use are highly restrictive. [13] For example, programmers need to make sure NR code cannot be extracted from their finished programs and used – a difficult requirement with dubious enforceability. [14]
However, Numerical Recipes does include the following statement regarding copyrights on computer programs:
Copyright does not protect ideas, but only the expression of those ideas in a particular form. In the case of a computer program, the ideas consist of the program's methodology and algorithm, including the necessary sequence of steps adopted by the programmer. The expression of those ideas is the program source code ... If you analyze the ideas contained in a program, and then express those ideas in your own completely different implementation, then that new program implementation belongs to you. [6]
One early motivation for the GNU Scientific Library was that a free library was needed as a substitute for Numerical Recipes. [15]
Another line of criticism centers on the coding style of the books, which strikes some modern readers as "Fortran-ish", though written in contemporary, object-oriented C++. [15] The authors have defended their very terse coding style as necessary to the format of the book because of space limitations and for readability. [4]
The books differ by edition (1st, 2nd, and 3rd) and by the computer language in which the code is given.
The books are published by Cambridge University Press.
In mathematics, the Legendre forms of elliptic integrals are a canonical set of three elliptic integrals to which all others may be reduced. Legendre chose the name elliptic integrals because the second kind gives the arc length of an ellipse of unit semi-major axis and eccentricity k.
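For reference, the three Legendre forms are conventionally written as the incomplete elliptic integrals of the first, second, and third kind (standard definitions, not taken from any particular text):

```latex
F(\varphi,k)     = \int_0^{\varphi} \frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}, \qquad
E(\varphi,k)     = \int_0^{\varphi} \sqrt{1-k^2\sin^2\theta}\,d\theta, \qquad
\Pi(n;\varphi,k) = \int_0^{\varphi} \frac{d\theta}{(1-n\sin^2\theta)\sqrt{1-k^2\sin^2\theta}}
```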
Interior-point methods are a certain class of algorithms that solve linear and nonlinear convex optimization problems.
In mathematics, the Lanczos approximation is a method for computing the gamma function numerically, published by Cornelius Lanczos in 1964. It is a practical alternative to the more popular Stirling's approximation for calculating the gamma function with fixed precision.
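As a rough illustration, a log-gamma routine based on the Lanczos approximation can be quite short. The sketch below uses one widely published coefficient set (g = 5, six terms); the function name is illustrative, it is not the book's routine, and it is valid only for x > 0.

```cpp
#include <cmath>

// Log-gamma via the Lanczos approximation (g = 5, six-term series), x > 0.
// Coefficients are a widely published set; lanczos_lgamma is an illustrative
// name, not a library function.
double lanczos_lgamma(double x) {
    static const double c[6] = {
         76.18009172947146,   -86.50532032941677,    24.01409824083091,
         -1.231739572450155,    1.208650973866179e-3, -5.395239384953e-6};
    double ser = 1.000000000190015;
    double y = x;
    for (int j = 0; j < 6; ++j) ser += c[j] / ++y;   // partial-fraction series
    double tmp = x + 5.5;
    tmp -= (x + 0.5) * std::log(tmp);
    return -tmp + std::log(2.5066282746310005 * ser / x);  // 2.50662... = sqrt(2*pi)
}
```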
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa. The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
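As a sketch, the radix-2 decimation-in-time butterfly combines two values with one twiddle factor; everything below, names included, is illustrative rather than taken from any particular FFT implementation.

```cpp
#include <complex>

// One radix-2 butterfly: combine two half-size DFT outputs a and b using the
// twiddle factor w = exp(-2*pi*i*k/N). Each FFT stage applies N/2 of these.
void butterfly(std::complex<double>& a, std::complex<double>& b,
               std::complex<double> w) {
    std::complex<double> t = w * b;
    b = a - t;
    a = a + t;
}
```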
Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps and is commonly denoted by the Greek letter epsilon (ε).
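A common way to see the value is to halve a candidate until adding it to 1 no longer changes the result. The small sketch below compares that estimate with the value reported by the C++ standard library; the loop is illustrative only and can be affected by extended-precision intermediates on some platforms.

```cpp
#include <cstdio>
#include <limits>

int main() {
    // Halve eps until 1 + eps/2 rounds back to 1; what remains is machine epsilon.
    double eps = 1.0;
    while (1.0 + eps / 2.0 > 1.0) eps /= 2.0;

    std::printf("measured  %.17g\n", eps);
    std::printf("reported  %.17g\n", std::numeric_limits<double>::epsilon());
    return 0;  // both are 2^-52, about 2.22e-16, for IEEE double precision
}
```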
In mathematics, Neville's algorithm is an algorithm used for polynomial interpolation that was derived by the mathematician Eric Harold Neville in 1934. Given n + 1 points, there is a unique polynomial of degree ≤ n which goes through the given points. Neville's algorithm evaluates this polynomial.
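A compact sketch of the scheme, with an illustrative function name: the table of interpolants is built in place, column by column, until the single remaining entry is the value of the degree-n polynomial at the evaluation point.

```cpp
#include <vector>

// Neville's algorithm: evaluate at t the unique polynomial through the points
// (x[i], p[i]). p starts as the y-values and is overwritten column by column.
double neville(const std::vector<double>& x, std::vector<double> p, double t) {
    const int n = static_cast<int>(x.size());
    for (int k = 1; k < n; ++k)
        for (int i = 0; i + k < n; ++i)
            p[i] = ((t - x[i + k]) * p[i] + (x[i] - t) * p[i + 1])
                   / (x[i] - x[i + k]);
    return p[0];  // value of the interpolating polynomial at t
}
```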
Quasi-Newton methods are used to find either zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration. The "full" Newton's method requires the Jacobian in order to search for zeros, or the Hessian for finding extrema. Some iterative methods that reduce to Newton's method, such as SLSQP, may be considered quasi-Newtonian.
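The simplest one-dimensional instance of the idea is the secant method, in which the unavailable derivative in Newton's update is replaced by a finite-difference estimate built from the two most recent iterates. The sketch below is illustrative only; it is not BFGS or any other multidimensional scheme.

```cpp
#include <cmath>
#include <functional>

// Secant iteration: a 1-D quasi-Newton root finder. The derivative f'(x) in
// Newton's step is approximated by (f(x1) - f(x0)) / (x1 - x0).
double secant(const std::function<double(double)>& f,
              double x0, double x1, double tol = 1e-12, int maxit = 100) {
    double f0 = f(x0), f1 = f(x1);
    for (int i = 0; i < maxit && std::fabs(x1 - x0) > tol; ++i) {
        if (f1 == f0) break;                          // flat secant: give up
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);  // quasi-Newton update
        x0 = x1; f0 = f1;
        x1 = x2; f1 = f(x2);
    }
    return x1;
}
```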
Adaptive quadrature is a numerical integration method in which the integral of a function is approximated using static quadrature rules on adaptively refined subintervals of the region of integration. Generally, adaptive algorithms are just as efficient and effective as traditional algorithms for "well behaved" integrands, but are also effective for "badly behaved" integrands for which traditional algorithms may fail.
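A standard small example of the idea is recursive adaptive Simpson integration: each interval's estimate is compared with the sum of its two halves, and only intervals that fail the test are subdivided further. The sketch below is a generic illustration, not a routine from the book.

```cpp
#include <cmath>
#include <functional>

using Fn = std::function<double(double)>;

// Simpson's rule on a single interval [a, b].
static double simpson(const Fn& f, double a, double b) {
    double c = 0.5 * (a + b);
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b));
}

// Adaptive Simpson: subdivide only where the local error estimate exceeds tol.
double adaptive_simpson(const Fn& f, double a, double b, double tol) {
    double c = 0.5 * (a + b);
    double whole = simpson(f, a, b);
    double left  = simpson(f, a, c);
    double right = simpson(f, c, b);
    if (std::fabs(left + right - whole) < 15.0 * tol)
        return left + right + (left + right - whole) / 15.0;  // Richardson correction
    return adaptive_simpson(f, a, c, 0.5 * tol)
         + adaptive_simpson(f, c, b, 0.5 * tol);
}
```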
Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974. It is especially applied where singularities or infinite derivatives exist at one or both endpoints.
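A minimal sketch of the rule on [-1, 1]: the change of variable x = tanh((π/2) sinh t) pushes the endpoints out to infinity, so a plain trapezoidal sum in t with step h handles endpoint singularities gracefully. The function name and the fixed step and truncation below are illustrative choices.

```cpp
#include <cmath>
#include <functional>

// Tanh-sinh quadrature of f over [-1, 1], trapezoidal sum with step h in the
// transformed variable, truncated at |t| <= n*h. Weights come from dx/dt.
double tanh_sinh(const std::function<double(double)>& f,
                 double h = 0.1, int n = 30) {
    const double half_pi = 1.57079632679489662;
    double sum = 0.0;
    for (int k = -n; k <= n; ++k) {
        double t = k * h;
        double u = half_pi * std::sinh(t);
        double x = std::tanh(u);                              // abscissa in (-1, 1)
        double w = half_pi * std::cosh(t) / (std::cosh(u) * std::cosh(u));
        sum += w * f(x);                                      // dx/dt weight
    }
    return h * sum;   // truncation keeps abscissas strictly inside (-1, 1)
}
```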
In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.
In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a continuous function f. The method is due to C. Ridders.
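The following is a sketch of the bracketing iteration under the usual assumptions (f continuous, f(a) and f(b) of opposite sign); it is an illustration of the method, not the book's routine, and the names are made up.

```cpp
#include <cmath>
#include <functional>

// Ridders' method: refine a bracketing interval [a, b] with f(a)*f(b) < 0.
double ridders(const std::function<double(double)>& f, double a, double b,
               double tol = 1e-12, int maxit = 60) {
    double fa = f(a), fb = f(b);
    for (int i = 0; i < maxit; ++i) {
        double m = 0.5 * (a + b), fm = f(m);
        double s = std::sqrt(fm * fm - fa * fb);   // > 0 while the root is bracketed
        if (s == 0.0) return m;
        // exponential (false-position-like) correction of the midpoint
        double x = m + (m - a) * ((fa > fb) ? 1.0 : -1.0) * fm / s;
        double fx = f(x);
        if (fx == 0.0 || std::fabs(b - a) < tol) return x;
        // keep whichever sub-interval still brackets the root
        if (fm * fx < 0.0)      { a = m; fa = fm; b = x; fb = fx; }
        else if (fa * fx < 0.0) { b = x; fb = fx; }
        else                    { a = x; fa = fx; }
    }
    return 0.5 * (a + b);
}
```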
In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations – to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps: a predictor step, which extrapolates an approximate value at the next point from values already computed, and a corrector step, which refines that initial approximation, typically by using it on the right-hand side of an otherwise implicit formula.
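A small sketch of one such pairing, an explicit Euler predictor followed by a trapezoidal corrector (often called Heun's method); the function name is illustrative and this is not a routine from the book.

```cpp
#include <functional>

// One predictor-corrector step for y' = f(t, y): predict with explicit Euler,
// then correct with the trapezoidal rule using the predicted value.
double heun_step(const std::function<double(double, double)>& f,
                 double t, double y, double h) {
    double predictor = y + h * f(t, y);                    // predictor step
    return y + 0.5 * h * (f(t, y) + f(t + h, predictor));  // corrector step
}
```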
In numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type applications, and the modified midpoint method, to obtain numerical solutions to ordinary differential equations (ODEs) with high accuracy and comparatively little computational effort. It is named after Roland Bulirsch and Josef Stoer. It is sometimes called the Gragg–Bulirsch–Stoer (GBS) algorithm because of the importance of a result about the error function of the modified midpoint method, due to William B. Gragg.
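The modified midpoint method, the integrator that Bulirsch–Stoer extrapolates, is itself short. The sketch below shows one large step taken with n substeps and omits the Richardson/rational extrapolation layer entirely; names are illustrative.

```cpp
#include <functional>

// Modified midpoint method for y' = f(t, y): advance y over a big step H using
// n substeps of size h = H/n, with the standard endpoint averaging.
double modified_midpoint(const std::function<double(double, double)>& f,
                         double t, double y, double H, int n) {
    double h  = H / n;
    double zm = y;                    // z_{m-1}
    double z  = y + h * f(t, y);      // z_m
    for (int m = 1; m < n; ++m) {
        double znext = zm + 2.0 * h * f(t + m * h, z);
        zm = z;
        z  = znext;
    }
    return 0.5 * (z + zm + h * f(t + H, z));
}
```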
Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock.
Powell's method, strictly Powell's conjugate direction method, is an algorithm proposed by Michael J. D. Powell for finding a local minimum of a function. The function need not be differentiable, and no derivatives are taken.
In fluid mechanics, meteorology and oceanography, a trajectory traces the motion of a single point, often called a parcel, in the flow.
The following is a timeline of numerical analysis after 1945, and deals with developments after the invention of the modern electronic computer, which began during the Second World War. For a fuller history of the subject before this period, see the timeline and history of mathematics.
Cyclic reduction is a numerical method for solving large linear systems by repeatedly splitting the problem. Each step eliminates the even- or odd-numbered rows and columns of the matrix, leaving a smaller system of the same form. The elimination step is relatively expensive, but splitting the problem allows computation in parallel.
In mathematics, Lentz's algorithm is an algorithm to evaluate continued fractions and compute tables of spherical Bessel functions.
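Below is a sketch of the modified form of the algorithm (with the usual tiny-value safeguard against zero denominators), applied to the all-ones continued fraction whose value is the golden ratio; it is an illustration of the general scheme, not code from the book or any library.

```cpp
#include <cmath>
#include <cstdio>

// Modified Lentz evaluation of a continued fraction
//   b0 + a1/(b1 + a2/(b2 + ...)).
// Here every a_j = b_j = 1, so the limit is the golden ratio.
int main() {
    const double tiny = 1e-30, eps = 1e-15;
    double f = 1.0;                 // b0 (replace with tiny if b0 == 0)
    double C = f, D = 0.0;
    for (int j = 1; j < 100; ++j) {
        double a = 1.0, b = 1.0;    // a_j, b_j for this particular fraction
        D = b + a * D; if (D == 0.0) D = tiny;
        C = b + a / C; if (C == 0.0) C = tiny;
        D = 1.0 / D;
        double delta = C * D;
        f *= delta;
        if (std::fabs(delta - 1.0) < eps) break;   // converged
    }
    std::printf("%.15f\n", f);      // ~1.618033988749895
    return 0;
}
```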