Smoothstep

[Figure: A plot of the smoothstep(x) and smootherstep(x) functions, using 0 as the left edge and 1 as the right edge.]

Smoothstep is a family of sigmoid-like interpolation and clamping functions commonly used in computer graphics, [1] [2] video game engines, [3] and machine learning. [4]

The function depends on three parameters, the input x, the "left edge" and the "right edge", with the left edge being assumed smaller than the right edge. The function receives a real number x as an argument and returns 0 if x is less than or equal to the left edge, 1 if x is greater than or equal to the right edge, and smoothly interpolates, using a Hermite polynomial, between 0 and 1 otherwise. The gradient of the smoothstep function is zero at both edges. This is convenient for creating a sequence of transitions using smoothstep to interpolate each segment as an alternative to using more sophisticated or expensive interpolation techniques.

In HLSL and GLSL, smoothstep implements S_1(x), the cubic Hermite interpolation after clamping:

\operatorname{smoothstep}(x) = S_1(x) =
\begin{cases}
0, & x \le 0 \\
3x^2 - 2x^3, & 0 \le x \le 1 \\
1, & 1 \le x
\end{cases}

This form assumes that the left edge is 0 and the right edge is 1, with the transition between the edges taking place where 0 ≤ x ≤ 1.
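As a worked example with general edges (the edge values 2 and 6 and the input 3 are chosen here purely for illustration), the input is first rescaled to the unit interval and then passed through the cubic:

t = \frac{x - \mathrm{edge}_0}{\mathrm{edge}_1 - \mathrm{edge}_0} = \frac{3 - 2}{6 - 2} = 0.25,
\qquad
S_1(t) = 3(0.25)^2 - 2(0.25)^3 = 0.1875 - 0.03125 = 0.15625.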

A modified C/C++ example implementation provided by AMD [5] follows.

// Clamp x to the range [lowerlimit, upperlimit]; the defaults give the 0..1 range.
float clamp(float x, float lowerlimit = 0.0f, float upperlimit = 1.0f) {
  if (x < lowerlimit) return lowerlimit;
  if (x > upperlimit) return upperlimit;
  return x;
}

float smoothstep(float edge0, float edge1, float x) {
  // Scale, and clamp x to 0..1 range
  x = clamp((x - edge0) / (edge1 - edge0));

  // Evaluate the cubic Hermite polynomial 3x^2 - 2x^3.
  return x * x * (3.0f - 2.0f * x);
}
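As a usage sketch (the fade_intensity helper and the 10..20 distance range are illustrative, not part of the AMD example; it assumes the smoothstep and clamp definitions above are in the same translation unit), smoothstep is often used to fade a quantity smoothly between two thresholds:

#include <stdio.h>

// Illustrative fade: full intensity inside 10 units, zero beyond 20 units.
float fade_intensity(float distance) {
    // smoothstep rises from 0 to 1 across [10, 20]; invert it for a fade-out.
    return 1.0f - smoothstep(10.0f, 20.0f, distance);
}

int main() {
    printf("%f\n", fade_intensity(5.0f));   // 1.0  (fully lit)
    printf("%f\n", fade_intensity(15.0f));  // 0.5  (mid-fade)
    printf("%f\n", fade_intensity(25.0f));  // 0.0  (fully faded)
    return 0;
}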

The general form for smoothstep, again assuming the left edge is 0 and the right edge is 1, is

S_n(x) =
\begin{cases}
0, & x \le 0 \\
x^{n+1} \sum_{k=0}^{n} \binom{n+k}{k} \binom{2n+1}{n-k} (-x)^k, & 0 \le x \le 1 \\
1, & 1 \le x
\end{cases}

S_0(x) is identical to the clamping function:

S_0(x) =
\begin{cases}
0, & x \le 0 \\
x, & 0 \le x \le 1 \\
1, & 1 \le x
\end{cases}

The characteristic S-shaped sigmoid curve is obtained with S_n(x) only for integers n ≥ 1. The order of the polynomial in the general smoothstep is 2n + 1. With n = 1, the slopes or first derivatives of the smoothstep are equal to zero at the left and right edge (x = 0 and x = 1), where the curve is appended to the constant or saturated levels. With higher integer n, the second and higher derivatives are zero at the edges, making the polynomial functions as flat as possible and the splice to the limit values of 0 or 1 more seamless.
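For n = 1, for example, the zero slope at both edges can be checked directly:

S_1'(x) = \frac{d}{dx}\left(3x^2 - 2x^3\right) = 6x - 6x^2 = 6x(1 - x), \qquad S_1'(0) = S_1'(1) = 0.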

Variations

Ken Perlin suggested [6] an improved version of the commonly used first-order smoothstep function, equivalent to the second order of its general form. It has zero 1st- and 2nd-order derivatives at x = 0 and x = 1:

\operatorname{smootherstep}(x) = S_2(x) =
\begin{cases}
0, & x \le 0 \\
6x^5 - 15x^4 + 10x^3, & 0 \le x \le 1 \\
1, & 1 \le x
\end{cases}

C/C++ reference implementation:

// Clamp x to the range [lowerlimit, upperlimit]; the defaults give the 0..1 range.
float clamp(float x, float lowerlimit = 0.0f, float upperlimit = 1.0f) {
  if (x < lowerlimit) return lowerlimit;
  if (x > upperlimit) return upperlimit;
  return x;
}

float smootherstep(float edge0, float edge1, float x) {
  // Scale, and clamp x to 0..1 range
  x = clamp((x - edge0) / (edge1 - edge0));

  // Evaluate the quintic polynomial 6x^5 - 15x^4 + 10x^3.
  return x * x * x * (x * (6.0f * x - 15.0f) + 10.0f);
}
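The claimed derivative properties can be checked on the polynomial part of smootherstep:

S_2'(x) = 30x^4 - 60x^3 + 30x^2 = 30x^2(1 - x)^2,
\qquad
S_2''(x) = 120x^3 - 180x^2 + 60x = 60x(1 - x)(1 - 2x),

both of which vanish at x = 0 and x = 1.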

Origin

3rd-order equation

Starting with a generic third-order polynomial function and its first derivative:

f(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0
f'(x) = 3a_3 x^2 + 2a_2 x + a_1

Applying the desired values for the function at both endpoints:

f(0) = 0 \quad\Rightarrow\quad a_0 = 0
f(1) = 1 \quad\Rightarrow\quad a_3 + a_2 + a_1 + a_0 = 1

Applying the desired values for the first derivative of the function at both endpoints:

f'(0) = 0 \quad\Rightarrow\quad a_1 = 0
f'(1) = 0 \quad\Rightarrow\quad 3a_3 + 2a_2 + a_1 = 0

Solving the system of 4 unknowns formed by the last 4 equations results in the values of the polynomial coefficients:

a_0 = 0, \quad a_1 = 0, \quad a_2 = 3, \quad a_3 = -2
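The reduction is short: with a_0 = a_1 = 0, the remaining two equations read

a_3 + a_2 = 1, \qquad 3a_3 + 2a_2 = 0,

and subtracting twice the first from the second gives a_3 = -2, hence a_2 = 3.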

This results in the third-order "smoothstep" function:

\operatorname{smoothstep}(x) = S_1(x) = -2x^3 + 3x^2

5th-order equation

Starting with a generic fifth-order polynomial function, its first derivative and its second derivative:

f(x) = a_5 x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0
f'(x) = 5a_5 x^4 + 4a_4 x^3 + 3a_3 x^2 + 2a_2 x + a_1
f''(x) = 20a_5 x^3 + 12a_4 x^2 + 6a_3 x + 2a_2

Applying the desired values for the function at both endpoints:

f(0) = 0 \quad\Rightarrow\quad a_0 = 0
f(1) = 1 \quad\Rightarrow\quad a_5 + a_4 + a_3 + a_2 + a_1 + a_0 = 1

Applying the desired values for the first derivative of the function at both endpoints:

f'(0) = 0 \quad\Rightarrow\quad a_1 = 0
f'(1) = 0 \quad\Rightarrow\quad 5a_5 + 4a_4 + 3a_3 + 2a_2 + a_1 = 0

Applying the desired values for the second derivative of the function at both endpoints:

f''(0) = 0 \quad\Rightarrow\quad 2a_2 = 0
f''(1) = 0 \quad\Rightarrow\quad 20a_5 + 12a_4 + 6a_3 + 2a_2 = 0

Solving the system of 6 unknowns formed by the last 6 equations results in the values of the polynomial coefficients:

a_0 = 0, \quad a_1 = 0, \quad a_2 = 0, \quad a_3 = 10, \quad a_4 = -15, \quad a_5 = 6
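With a_0 = a_1 = a_2 = 0, the remaining three equations reduce to

a_5 + a_4 + a_3 = 1, \qquad 5a_5 + 4a_4 + 3a_3 = 0, \qquad 20a_5 + 12a_4 + 6a_3 = 0.

Eliminating a_3 (subtract three times the first equation from the second, and six times the first from the third) leaves 2a_5 + a_4 = -3 and 14a_5 + 6a_4 = -6, whose solution is a_5 = 6, a_4 = -15, and therefore a_3 = 10.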

This results in the fifth-order "smootherstep" function:

\operatorname{smootherstep}(x) = S_2(x) = 6x^5 - 15x^4 + 10x^3

7th-order equation

Applying similar techniques, the 7th-order equation is found to be:

S_3(x) = -20x^7 + 70x^6 - 84x^5 + 35x^4
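As a quick check, the coefficients sum to -20 + 70 - 84 + 35 = 1, so S_3(1) = 1, and the derivative factors as

S_3'(x) = -140x^6 + 420x^5 - 420x^4 + 140x^3 = 140x^3(1 - x)^3,

so the first, second and third derivatives all vanish at x = 0 and x = 1.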

Generalization to higher-order equations

Smoothstep polynomials are generalized, with 0 ≤ x ≤ 1, as

S_N(x) = \sum_{n=0}^{N} \binom{-N-1}{n} \binom{2N+1}{N-n} x^{N+n+1}
       = \sum_{n=0}^{N} (-1)^n \binom{N+n}{n} \binom{2N+1}{N-n} x^{N+n+1}

where N determines the order of the resulting polynomial function, which is 2N + 1. The first seven smoothstep polynomials, with 0 ≤ x ≤ 1, are

\begin{aligned}
S_0(x) &= x \\
S_1(x) &= -2x^3 + 3x^2 \\
S_2(x) &= 6x^5 - 15x^4 + 10x^3 \\
S_3(x) &= -20x^7 + 70x^6 - 84x^5 + 35x^4 \\
S_4(x) &= 70x^9 - 315x^8 + 540x^7 - 420x^6 + 126x^5 \\
S_5(x) &= -252x^{11} + 1386x^{10} - 3080x^9 + 3465x^8 - 1980x^7 + 462x^6 \\
S_6(x) &= 924x^{13} - 6006x^{12} + 16380x^{11} - 24024x^{10} + 20020x^9 - 9009x^8 + 1716x^7
\end{aligned}

The differential of S_N(x) is

\frac{d}{dx} S_N(x) = (2N + 1) \binom{2N}{N} \left(x - x^2\right)^N
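For example, for N = 1 and N = 2 this reproduces the derivatives of the smoothstep and smootherstep polynomials given above:

\frac{d}{dx} S_1(x) = 3 \binom{2}{1} (x - x^2) = 6x - 6x^2,
\qquad
\frac{d}{dx} S_2(x) = 5 \binom{4}{2} (x - x^2)^2 = 30x^2 - 60x^3 + 30x^4.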

It can be shown that the smoothstep polynomials S_N(x) that transition from 0 to 1 when x transitions from 0 to 1 can be simply mapped to odd-symmetry polynomials R_N(x), where

S_N(x) = \frac{R_N(2x - 1) + 1}{2}

and

R_N(-x) = -R_N(x).

The argument of R_N(x) ranges over −1 ≤ x ≤ 1, and the polynomial is appended to the constant −1 on the left and to +1 on the right.
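For instance, for N = 1 the mapping gives

R_1(x) = 2\,S_1\!\left(\frac{x + 1}{2}\right) - 1 = \frac{3x - x^3}{2},

an odd polynomial that rises from −1 at x = −1 to +1 at x = +1.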

An implementation of S_N(x) in JavaScript: [7]

// Generalized smoothstep
function generalSmoothStep(N, x) {
  x = clamp(x, 0, 1); // x must be equal to or between 0 and 1
  var result = 0;
  for (var n = 0; n <= N; ++n)
    result += pascalTriangle(-N - 1, n) *
              pascalTriangle(2 * N + 1, N - n) *
              Math.pow(x, N + n + 1);
  return result;
}

// Returns binomial coefficient without explicit use of factorials,
// which can't be used with negative integers
function pascalTriangle(a, b) {
  var result = 1;
  for (var i = 0; i < b; ++i)
    result *= (a - i) / (i + 1);
  return result;
}

function clamp(x, lowerlimit, upperlimit) {
  if (x < lowerlimit) x = lowerlimit;
  if (x > upperlimit) x = upperlimit;
  return x;
}
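As a usage note, generalSmoothStep(2, 0.5) evaluates the fifth-order polynomial 6x^5 − 15x^4 + 10x^3 at x = 0.5 and returns 0.5, matching smootherstep above; by the odd symmetry described earlier, every S_N(x) passes through the point (0.5, 0.5).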

Inverse Smoothstep

The inverse of smoothstep() can be useful when doing certain operations in computer graphics when its effect needs to be reversed or compensated for. In the case of the 3rd-order equation there exists an analytical solution for the inverse, which is:

S_1^{-1}(x) = \frac{1}{2} - \sin\!\left(\frac{\arcsin(1 - 2x)}{3}\right)

This arises as the inverse of 1/2 − sin(3 arcsin(1/2 − x))/2, whose Maclaurin series terminates at the third-degree term, meaning that expression and 3x^2 − 2x^3 express the same function. The series expansion of the inverse, on the other hand, does not terminate.

In GLSL:

float inverse_smoothstep(float x) {
  return 0.5 - sin(asin(1.0 - 2.0 * x) / 3.0);
}
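As a minimal round-trip sketch (plain C rather than GLSL; the helper names smoothstep01 and inverse_smoothstep01 and the test value 0.25 are illustrative, not from the original text), applying the inverse to the output of the cubic smoothstep recovers the original input up to floating-point error:

#include <math.h>
#include <stdio.h>

// Cubic smoothstep on [0, 1]; no clamping needed for this in-range test.
static double smoothstep01(double x) { return x * x * (3.0 - 2.0 * x); }

// Analytical inverse of the cubic smoothstep.
static double inverse_smoothstep01(double x) {
    return 0.5 - sin(asin(1.0 - 2.0 * x) / 3.0);
}

int main(void) {
    double y = smoothstep01(0.25);             // 0.15625
    printf("%f\n", inverse_smoothstep01(y));   // prints 0.250000
    return 0;
}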

References

  1. Smoothstep at Microsoft Developer Network.
  2. GLSL Language Specification, Version 1.40.
  3. Unity game engine SmoothStep documentation.
  4. Hazimeh, Hussein; Ponomareva, Natalia; Mol, Petros; Tan, Zhenyu; Mazumder, Rahul (2020). The Tree Ensemble Layer: Differentiability meets Conditional Computation (PDF). International Conference on Machine Learning. PMLR.
  5. Natalya Tatarchuk (2003). "Advanced Real-Time Shader Techniques". AMD. p. 94. Retrieved 2022-04-16.
  6. Texturing and Modeling, Third Edition: A Procedural Approach.
  7. General smoothstep equation.