Procedural parameter


In computing, a procedural parameter is a parameter of a procedure that is itself a procedure.


This concept is an extremely powerful and versatile programming tool, because it allows programmers to modify certain steps of a library procedure in arbitrarily complicated ways, without having to understand or modify the code of that procedure.

This tool is particularly effective and convenient in languages that support local function definitions, such as Pascal and the modern GNU dialect of C. It is even more so when function closures are available. The same functionality (and more) is provided by objects in object-oriented programming languages, but at a significantly higher cost.

Procedural parameters are somewhat related to the concepts of first-class function and anonymous function, but are distinct from them. Those two concepts have more to do with how functions are defined, rather than how they are used.

Basic concept

In most languages that provide this feature, a procedural parameter f of a subroutine P can be called inside the body of P as if it were an ordinary procedure:

procedure P(f):
    return f(6,3) * f(2,1)

When calling the subroutine P, one must give it one argument, that must be some previously defined function compatible with the way P uses its parameter f. For example, if we define

procedure plus(x, y):
    return x + y

then we may call P(plus), and the result will be plus(6,3) * plus(2,1) = (6 + 3)*(2 + 1) = 27. On the other hand, if we define

procedure quot(u, v):
    return u/v

then the call P(quot) will return quot(6,3)*quot(2,1) = (6/3)*(2/1) = 4. Finally, if we define

procedure evil(z):
    return z + 100

then the call P(evil) will not make much sense, and may be flagged as an error.

Syntactic details

Some programming languages that have this feature may allow or require a complete type declaration for each procedural parameter f, including the number and type of its arguments, and the type of its result, if any. For example, in the C programming language the example above could be written as

int P(int (*f)(int a, int b)) { return f(6,3) * f(2,1); }

In principle, the actual function actf that is passed as argument when P is called must be type-compatible with the declared type of the procedure parameter f. This usually means that actf and f must return the same type of result, must have the same number of arguments, and corresponding arguments must have the same type. The names of the arguments need not be the same, however, as shown by the plus and quot examples above. However, some programming languages may be more restrictive or more liberal in this regard.

Scoping

In languages that allow procedural parameters, the scoping rules are usually defined in such a way that procedural parameters are executed in their native scope. More precisely, suppose that the function actf is passed as argument to P, as its procedural parameter f, and f is then called from inside the body of P. While actf is being executed, it sees the environment of its definition.

The implementation of these scoping rules is not trivial. By the time that actf is finally executed, the activation records where its environment variables live may be arbitrarily deep in the stack. This is the so-called downwards funarg problem.

Example: Generic insertion sort

The concept of procedural parameter is best explained by examples. A typical application is the following generic implementation of the insertion sort algorithm, which takes two integer parameters a, b and two procedural parameters prec, swap:

procedure isort(a, b, prec, swap):
    integer i, j
    i ← a
    while i ≤ b do
        j ← i
        while j > a and prec(j, j−1) do
            swap(j, j−1)
            j ← j−1
        i ← i+1

This procedure can be used to sort the elements x[a] through x[b] of some array x, of arbitrary type, in a user-specified order. The parameters prec and swap should be two functions, defined by the client, both taking two integers r, s between a and b. The prec function should return true if and only if the data stored in x[r] should precede the data stored in x[s], in the ordering defined by the client. The swap function should exchange the contents of x[r] and x[s], and return no result.

By the proper choice of the functions prec and swap, the same isort procedure can be used to reorder arrays of any data type, stored in any medium and organized in any data structure that provides indexed access to individual array elements. (Note however that there are sorting algorithms that are much more efficient than insertion sort for large arrays.)

Sorting floating-point numbers

For instance, we can sort an array z of 20 floating-point numbers, z[1] through z[20], in increasing order by calling isort(1, 20, zprec, zswap), where the functions zprec and zswap are defined as

procedure zprec(r, s):
    return (z[r] < z[s])

procedure zswap(r, s):
    float t
    t ← z[r]
    z[r] ← z[s]
    z[s] ← t

Sorting rows of a matrix

For another example, let M be a matrix of integers with 10 rows and 20 columns, with indices starting at 1. The following code will rearrange the elements in each row so that all the even values come before all odd values:

integer i

procedure eoprec(r, s):
    return (M[i, r] mod 2) < (M[i, s] mod 2)

procedure eoswap(r, s):
    integer t
    t ← M[i, r]
    M[i, r] ← M[i, s]
    M[i, s] ← t

for i from 1 to 10 do
    isort(1, 20, eoprec, eoswap)

Note that the effects of eoprec and eoswap depend on the row number i, but the isort procedure does not need to know that.

Vector-sorting procedure

The following example uses isort to define a procedure vecsort that takes an integer n and an integer vector v with elements v[0] through v[n−1] and sorts them in either increasing or decreasing order, depending on whether a third parameter incr is true or false, respectively:

procedure vecsort(n, v, incr):

    procedure vprec(r, s):
        if incr then return v[r] < v[s]
        else return v[r] > v[s]

    procedure vswap(r, s):
        integer t
        t ← v[r]
        v[r] ← v[s]
        v[s] ← t

    isort(0, n−1, vprec, vswap)

Note the use of nested function definitions to get a function vprec whose effect depends on the parameter incr passed to vecsort. In languages that do not allow nested function definitions, like standard C, obtaining this effect would require rather complicated and/or thread-unsafe code.


Example: merging two sequences

The following example illustrates the use of procedural parameters to process abstract data structures independently of their concrete implementation. The problem is to merge two ordered sequences of records into a single sorted sequence, where the nature of the records and the ordering criterion can be chosen by the client. The following implementation assumes only that each record can be referenced by a memory address, and there is a "null address" Λ that is not the address of any valid record. The client must provide the addresses A, B of the first records in each sequence, and the functions prec, nextA, appendA, nextB, and appendB, described below.

procedure merge(A, B, prec, nextA, appendA, nextB, appendB):
    address ini, fin, t
    ini ← Λ; fin ← Λ
    while A ≠ Λ or B ≠ Λ do
        if B = Λ or (A ≠ Λ and B ≠ Λ and prec(A, B)) then
            t ← nextA(A)
            fin ← appendA(A, fin); if ini = Λ then ini ← fin
            A ← t
        else
            t ← nextB(B)
            fin ← appendB(B, fin); if ini = Λ then ini ← fin
            B ← t
    return ini

The function prec should take the addresses r, s of two records, one from each sequence, and return true if the first record should come before the other in the output sequence. The function nextA should take the address of a record from the first sequence, and return the address of the next record in the same sequence, or Λ if there is none. The function appendA should append the first record from sequence A to the output sequence; its arguments are the address A of the record to be appended, and the address fin of the last record of the output list (or Λ if that list is still empty). The procedure appendA should return the updated address of the final element of the output list. The procedures nextB and appendB are analogous for the other input sequence.

Merging linked lists

To illustrate the use of the generic merge procedure, here is the code for merging two simple linked lists, starting with nodes at addresses R, S. Here we assume that each record x contains an integer field x.INFO and an address field x.NEXT that points to the next node; where the info fields are in increasing order in each list. The input lists are dismantled by the merge, and their nodes are used to build the output list.

procedure listmerge(R, S):

    procedure prec(r, s):
        return r.INFO < s.INFO

    procedure next(x):
        return x.NEXT

    procedure append(x, fin):
        if fin ≠ Λ then fin.NEXT ← x
        x.NEXT ← Λ
        return x

    return merge(R, S, prec, next, append, next, append)

Merging vectors

The following code illustrates the independence of the generic merge procedure from the actual representation of the sequences. It merges the elements of two ordinary arrays U[0] through U[m−1] and V[0] through V[n−1] of floating-point numbers, in decreasing order. The input arrays are not modified, and the merged sequence of values is stored into a third vector W[0] through W[m+n−1]. As in the C programming language, we assume that the expression "&V" yields the address of variable V, "*p" yields the variable whose address is the value of p, and that "&(X[i])" is equivalent to "&(X[0]) + i" for any array X and any integer i.

procedure arraymerge(U, m, V, n, W):

    procedure prec(r, s):
        return (*r) > (*s)

    procedure nextU(x):
        if x = &(U[m−1]) then return Λ else return x + 1

    procedure nextV(x):
        if x = &(V[n−1]) then return Λ else return x + 1

    procedure append(x, fin):
        if fin = Λ then fin ← &(W[0])
        (*fin) ← (*x)
        return fin + 1

    if m = 0 then U ← Λ
    if n = 0 then V ← Λ
    return merge(U, V, prec, nextU, append, nextV, append)

Example: Definite integral

Integrating over an interval

The following procedure computes the approximate definite integral ∫_a^b f(x) dx of a given real-valued function f over a given interval [a, b] of the real line. The numerical method used is the trapezium rule with a given number n of steps; the real numbers are approximated by floating-point numbers.

procedure Intg(f, a, b, n):
    float t, x, s; integer i
    if b = a then return 0
    x ← a; s ← f(a)/2
    for i from 1 to n−1 do
        t ← i/n; x ← (1−t)*a + t*b
        s ← s + f(x)
    s ← s + f(b)/2
    return (b − a) * s / n

Integrating over a disk

Consider now the problem of integrating a given function g(x, y), with two arguments, over a disk with given center (xc, yc) and given radius R. This problem can be reduced to two nested single-variable integrals by the change of variables x = xc + z cos(t), y = yc + z sin(t):

    ∫∫ g(x, y) dx dy = ∫_0^R z ( ∫_0^2π g(xc + z cos(t), yc + z sin(t)) dt ) dz

The following code implements the right-hand-side formula:

procedure DiskIntg(g, xc, yc, R, n):

    procedure gring(z):

        procedure gpolar(t):
            float x, y
            x ← xc + z * cos(t)
            y ← yc + z * sin(t)
            return g(x, y)

        integer m ← round(n*z/R)
        return z * Intg(gpolar, 0, 2*π, m)

    return Intg(gring, 0, R, n)

This code uses the integration procedure Intg at two levels. The outer level (last line) uses Intg to compute the integral of gring(z) for z varying from 0 to R. The inner level (next-to-last line) defines gring(z) as being the line integral of g over the circle with center (xc, yc) and radius z.

History

Procedural parameters were invented before the age of electronic computers by the mathematician Alonzo Church, as part of his lambda calculus model of computation.

Procedural parameters as a programming language feature were introduced by ALGOL 60. In fact, ALGOL 60 had a powerful "call by name" parameter-passing mechanism that could simplify some uses of procedural parameters; see Jensen's Device.

Procedural parameters were an essential feature of the LISP programming language, which also introduced the concept of function closure or funarg. The C programming language allows function pointers to be passed as parameters, which accomplish the same end, and are often used as callbacks in event-driven programming and as error handlers. However, few modern C compilers allow nested function definitions, so its other uses are relatively uncommon. Procedural parameters were also provided in Pascal, together with nested procedure definitions; however, since standard Pascal did not allow separate compilation, the feature was little used in that language, too.
