Xiaolin Wu's line algorithm is an algorithm for line antialiasing. It was presented in the article "An Efficient Antialiasing Technique" in the July 1991 issue of Computer Graphics, as well as in the article "Fast Antialiasing" in the June 1992 issue of Dr. Dobb's Journal.
Bresenham's algorithm draws lines extremely quickly, but it does not perform anti-aliasing. In addition, it cannot handle any cases where the line endpoints do not lie exactly on integer points of the pixel grid. A naive approach to anti-aliasing the line would take an extremely long time. Wu's algorithm is comparatively fast, but is still slower than Bresenham's algorithm. The algorithm consists of drawing pairs of pixels straddling the line, each coloured according to its distance from the line. Pixels at the line ends are handled separately. Lines less than one pixel long are handled as a special case.
An extension to the algorithm for circle drawing was presented by Xiaolin Wu in the book Graphics Gems II. Just as the line drawing algorithm is a replacement for Bresenham's line drawing algorithm, the circle drawing algorithm is a replacement for Bresenham's circle drawing algorithm.
function plot(x, y, c) is
    plot the pixel at (x, y) with brightness c (where 0 ≤ c ≤ 1)

// integer part of x
function ipart(x) is
    return floor(x)

function round(x) is
    return ipart(x + 0.5)

// fractional part of x
function fpart(x) is
    return x - ipart(x)

function rfpart(x) is
    return 1 - fpart(x)

function drawLine(x0, y0, x1, y1) is
    boolean steep := abs(y1 - y0) > abs(x1 - x0)

    if steep then
        swap(x0, y0)
        swap(x1, y1)
    end if
    if x0 > x1 then
        swap(x0, x1)
        swap(y0, y1)
    end if

    dx := x1 - x0
    dy := y1 - y0
    if dx == 0.0 then
        gradient := 1.0
    else
        gradient := dy / dx
    end if

    // handle first endpoint
    xend := round(x0)
    yend := y0 + gradient * (xend - x0)
    xgap := rfpart(x0 + 0.5)
    xpxl1 := xend // this will be used in the main loop
    ypxl1 := ipart(yend)
    if steep then
        plot(ypxl1,     xpxl1, rfpart(yend) * xgap)
        plot(ypxl1 + 1, xpxl1,  fpart(yend) * xgap)
    else
        plot(xpxl1, ypxl1,     rfpart(yend) * xgap)
        plot(xpxl1, ypxl1 + 1,  fpart(yend) * xgap)
    end if
    intery := yend + gradient // first y-intersection for the main loop

    // handle second endpoint
    xend := round(x1)
    yend := y1 + gradient * (xend - x1)
    xgap := fpart(x1 + 0.5)
    xpxl2 := xend // this will be used in the main loop
    ypxl2 := ipart(yend)
    if steep then
        plot(ypxl2,     xpxl2, rfpart(yend) * xgap)
        plot(ypxl2 + 1, xpxl2,  fpart(yend) * xgap)
    else
        plot(xpxl2, ypxl2,     rfpart(yend) * xgap)
        plot(xpxl2, ypxl2 + 1,  fpart(yend) * xgap)
    end if

    // main loop
    if steep then
        for x from xpxl1 + 1 to xpxl2 - 1 do
        begin
            plot(ipart(intery),     x, rfpart(intery))
            plot(ipart(intery) + 1, x,  fpart(intery))
            intery := intery + gradient
        end
    else
        for x from xpxl1 + 1 to xpxl2 - 1 do
        begin
            plot(x, ipart(intery),     rfpart(intery))
            plot(x, ipart(intery) + 1,  fpart(intery))
            intery := intery + gradient
        end
    end if
end function
In computer graphics, rasterisation or rasterization is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.
Bresenham's line algorithm is a line drawing algorithm that determines the points of an n-dimensional raster that should be selected in order to form a close approximation to a straight line between two points. It is commonly used to draw line primitives in a bitmap image, as it uses only integer addition, subtraction, and bit shifting, all of which are very cheap operations in historically common computer architectures. It is an incremental error algorithm, and one of the earliest algorithms developed in the field of computer graphics. An extension to the original algorithm called the midpoint circle algorithm may be used for drawing circles.
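For comparison with Wu's antialiased approach, here is a minimal integer-only sketch of Bresenham's algorithm in Python; the function name and the list-of-pixels return convention are illustrative choices, not prescribed by the algorithm.

def bresenham_line(x0, y0, x1, y1):
    """Return the integer pixel coordinates of a line from (x0, y0) to (x1, y1),
    using only integer addition and subtraction (no anti-aliasing)."""
    pixels = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1   # step direction along x
    sy = 1 if y0 < y1 else -1   # step direction along y
    err = dx + dy               # accumulated error term
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:            # error says step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:            # error says step vertically
            err += dx
            y0 += sy
    return pixels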
In computer graphics, a line drawing algorithm is an algorithm for approximating a line segment on discrete graphical media, such as pixel-based displays and printers. On such media, line drawing requires an approximation. Basic algorithms rasterize lines in one color. A better representation with multiple color gradations requires an advanced process, spatial anti-aliasing.
In mathematics, linear interpolation is a method of curve fitting using linear polynomials to construct new data points within the range of a discrete set of known data points.
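For two known points (x0, y0) and (x1, y1), a direct Python transcription of the two-point case (names are illustrative):

def lerp(x0, y0, x1, y1, x):
    # Linear interpolation: y varies in proportion to x between the two points.
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)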
In numerical analysis, polynomial interpolation is the interpolation of a given bivariate data set by the polynomial of lowest possible degree that passes through the points of the dataset.
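One way to illustrate this is the Lagrange form, sketched below in Python; the Newton form described next is an equivalent construction, and the function name here is an assumption.

def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the lowest-degree polynomial through the points (xs[i], ys[i])."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis polynomial L_i is 1 at xs[i] and 0 at every other node.
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total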
In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method.
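A compact sketch in Python, assuming the standard in-place divided-difference table; it also illustrates the divided differences method discussed below (function names are assumptions).

def newton_coefficients(xs, ys):
    """Compute Newton-form coefficients via a divided-difference table, in place."""
    coeffs = list(ys)
    n = len(xs)
    for order in range(1, n):
        # Each pass replaces the tail with the next-order divided differences.
        for i in range(n - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_evaluate(xs, coeffs, x):
    """Evaluate the Newton polynomial at x using Horner-style nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result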
Perlin noise is a type of gradient noise developed by Ken Perlin in 1983. It has many uses, including but not limited to: procedurally generating terrain, applying pseudo-random changes to a variable, and assisting in the creation of image textures. It is most commonly implemented in two, three, or four dimensions, but can be defined for any number of dimensions.
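A minimal one-dimensional sketch in Python: production implementations typically use Perlin's fixed 256-entry permutation table and work in two or three dimensions, so the seeded shuffle and function names here are illustrative assumptions.

import math
import random

# A seeded shuffle stands in for classic Perlin's fixed 256-entry permutation table.
_PERM = list(range(256))
random.Random(42).shuffle(_PERM)

def _fade(t):
    # Perlin's quintic smoothstep 6t^5 - 15t^4 + 10t^3: zero first and second
    # derivatives at t = 0 and t = 1, which hides the lattice seams.
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin_1d(x):
    """Gradient noise on a one-dimensional integer lattice."""
    xi = math.floor(x)
    xf = x - xi                       # position within the lattice cell, in [0, 1)
    # Hash each lattice corner to a pseudo-random gradient of +1 or -1.
    g0 = 1.0 if _PERM[xi & 255] & 1 else -1.0
    g1 = 1.0 if _PERM[(xi + 1) & 255] & 1 else -1.0
    # Corner contributions: gradient times displacement from that corner.
    n0 = g0 * xf
    n1 = g1 * (xf - 1.0)
    return n0 + _fade(xf) * (n1 - n0)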
In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. That is, instead of fitting a single, high-degree polynomial to all of the values at once, spline interpolation fits low-degree polynomials to small subsets of the values, for example, fitting nine cubic polynomials between each of the pairs of ten points, instead of fitting a single degree-nine polynomial to all of them. Spline interpolation is often preferred over polynomial interpolation because the interpolation error can be made small even when using low-degree polynomials for the spline. Spline interpolation also avoids the problem of Runge's phenomenon, in which oscillation can occur between points when interpolating using high-degree polynomials.
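A brief usage sketch with SciPy's CubicSpline; the sample data is invented for illustration. With ten sample points, the spline fits nine separate cubics, one between each adjacent pair.

import numpy as np
from scipy.interpolate import CubicSpline

xs = np.linspace(0, 9, 10)   # ten nodes
ys = np.sin(xs)

spline = CubicSpline(xs, ys)

# Evaluate the piecewise-cubic interpolant on a finer grid.
fine = np.linspace(0, 9, 200)
values = spline(fine)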
In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.
In computer graphics, the Liang–Barsky algorithm is a line clipping algorithm. It uses the parametric equation of a line and inequalities describing the range of the clipping window to determine the intersections between the line and the clip window; from these intersections it knows which portion of the line should be drawn, which makes it significantly more efficient than Cohen–Sutherland. The idea of the Liang–Barsky clipping algorithm is to do as much testing as possible before computing line intersections.
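A sketch of the parametric clipping test in Python, assuming an axis-aligned clip window given by (xmin, ymin, xmax, ymax); the function name and the None-on-reject convention are illustrative.

def liang_barsky(xmin, ymin, xmax, ymax, x0, y0, x1, y1):
    """Clip the segment (x0, y0)-(x1, y1) to an axis-aligned window.
    Returns the clipped endpoints, or None if the segment is entirely outside."""
    dx = x1 - x0
    dy = y1 - y0
    t0, t1 = 0.0, 1.0  # parametric range of the visible portion

    # Each (p, q) pair tests the line against one window edge.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to this edge and outside it
        else:
            t = q / p
            if p < 0:                # candidate entry point
                if t > t1:
                    return None      # enters after it leaves: nothing visible
                t0 = max(t0, t)
            else:                    # candidate exit point
                if t < t0:
                    return None
                t1 = min(t1, t)

    return (x0 + t0 * dx, y0 + t0 * dy,
            x0 + t1 * dx, y0 + t1 * dy)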
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa. The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
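In the radix-2 decimation-in-time case, the butterfly combines two inputs a and b with a twiddle factor w = e^(−2πik/n); a direct Python transcription (the function name is illustrative):

import cmath

def butterfly(a, b, k, n):
    """Radix-2 DIT butterfly: combine sub-DFT outputs a and b into two outputs."""
    w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
    return a + w * b, a - w * b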
In computer graphics, the Cohen–Sutherland algorithm is an algorithm used for line clipping. The algorithm divides a two-dimensional space into 9 regions and then efficiently determines the lines and portions of lines that are visible in the central region of interest.
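A sketch in Python using the usual four outcode bits; the bit values and helper names are conventional choices rather than part of the algorithm's specification.

# Outcode bits for the nine regions around the clip window.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def _outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip the segment to the window; returns clipped endpoints or None."""
    code0 = _outcode(x0, y0, xmin, ymin, xmax, ymax)
    code1 = _outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (code0 | code1):
            return x0, y0, x1, y1      # both endpoints inside: trivially accept
        if code0 & code1:
            return None                # both outside the same edge: trivially reject
        # Pick an endpoint that is outside and move it to the window boundary.
        out = code0 or code1
        if out & TOP:
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif out & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif out & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:  # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if out == code0:
            x0, y0 = x, y
            code0 = _outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            code1 = _outcode(x1, y1, xmin, ymin, xmax, ymax)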
A binary moment diagram (BMD) is a generalization of the binary decision diagram (BDD) to linear functions over domains such as booleans (like BDDs), but also to integers or to real numbers.
The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers and, by repeating this reduction, to at most n^(log₂ 3) ≈ n^1.58 single-digit multiplications. It is therefore asymptotically faster than the traditional schoolbook algorithm, which performs n² single-digit products.
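A minimal recursive sketch in Python, splitting on decimal digits for readability; real implementations split on machine words, and the base-case threshold here is illustrative.

def karatsuba(x, y):
    """Multiply two non-negative integers with three recursive half-size products."""
    if x < 10 or y < 10:
        return x * y                          # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)                     # split each number into high/low halves
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)                     # high * high
    c = karatsuba(xl, yl)                     # low * low
    b = karatsuba(xh + xl, yh + yl) - a - c   # both cross terms from one product
    return a * p * p + b * p + c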
Simplicial continuation, or piecewise linear continuation, is a one-parameter continuation method which is well suited to small to medium embedding spaces. The algorithm has since been generalized to compute higher-dimensional manifolds.
In computer graphics, a digital differential analyzer (DDA) is hardware or software used for interpolation of variables over an interval between a start and end point. DDAs are used for rasterization of lines, triangles and polygons. They can be extended to non-linear functions, such as perspective-correct texture mapping, quadratic curves, and traversing voxels.
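A simple floating-point DDA line rasterizer in Python; the function name and the list-of-pixels return value are illustrative.

def dda_line(x0, y0, x1, y1):
    """Rasterize a line by stepping one unit along the major axis and
    incrementing the other coordinate by a fixed fractional amount."""
    dx = x1 - x0
    dy = y1 - y0
    steps = int(max(abs(dx), abs(dy)))   # one step per pixel on the major axis
    if steps == 0:
        return [(round(x0), round(y0))]
    x_inc = dx / steps
    y_inc = dy / steps
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return pixels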
In the field of mathematical analysis, an interpolation space is a space which lies "in between" two other Banach spaces. The main applications are in Sobolev spaces, where spaces of functions that have a noninteger number of derivatives are interpolated from the spaces of functions with integer number of derivatives.
A summed-area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image. It was introduced to computer graphics in 1984 by Frank Crow for use with mipmaps. In computer vision it was popularized by Lewis and then given the name "integral image" and prominently used within the Viola–Jones object detection framework in 2001. Historically, this principle is very well known in the study of multi-dimensional probability distribution functions, namely in computing 2D probabilities from the respective cumulative distribution functions.
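A minimal sketch in Python: one pass builds the table, after which any rectangular sum costs at most four lookups (function names are assumptions).

def build_sat(grid):
    """Summed-area table: sat[i][j] = sum of grid[0..i][0..j], built in one pass."""
    h, w = len(grid), len(grid[0])
    sat = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            sat[i][j] = (grid[i][j]
                         + (sat[i - 1][j] if i > 0 else 0)
                         + (sat[i][j - 1] if j > 0 else 0)
                         - (sat[i - 1][j - 1] if i > 0 and j > 0 else 0))
    return sat

def rect_sum(sat, top, left, bottom, right):
    """Sum over the inclusive rectangle [top..bottom] x [left..right] in O(1)."""
    total = sat[bottom][right]
    if top > 0:
        total -= sat[top - 1][right]
    if left > 0:
        total -= sat[bottom][left - 1]
    if top > 0 and left > 0:
        total += sat[top - 1][left - 1]
    return total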
Theta* is an any-angle path planning algorithm that is based on the A* search algorithm. It can find near-optimal paths with run times comparable to those of A*.
In computability theory, Bekić's theorem or Bekić's lemma is a theorem about fixed points which allows splitting a mutual recursion into recursions on one variable at a time. It was proved by the Austrian computer scientist Hans Bekić (1936–1982) in 1969, and published posthumously in a book by Cliff Jones in 1984.