A direct function (dfn, pronounced "dee fun") is an alternative way to define a function or operator (a higher-order function) in the programming language APL. A direct operator can also be called a dop (pronounced "dee op"). They were invented by John Scholes in 1996. [1] They are a unique combination of array programming, higher-order functions, and functional programming, and are a major distinguishing advance of early 21st-century APL over prior versions.
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines, wherein ⍺ denotes the left argument and ⍵ the right, and ∇ denotes recursion (function self-reference). For example, the function PT tests whether each row of ⍵ is a Pythagorean triplet (by testing whether the sum of squares equals twice the square of the maximum).
      PT←{(+/⍵*2)=2×(⌈/⍵)*2}
      PT 3 4 5
1
      x
 4  5  3
 3 11  6
 5 13 12
17 16  8
11 12  4
17 15  8
      PT x
1 0 1 0 0 1
The factorial function as a dfn:
      fact←{0=⍵:1 ⋄ ⍵×∇ ⍵-1}
      fact 5
120
      fact¨⍳10    ⍝ fact applied to each element of 0 to 9
1 1 2 6 24 120 720 5040 40320 362880
The rules for dfns are summarized by the following "reference card": [2]
{⍺ function ⍵}       |  {⍺⍺ operator ⍵⍵}      |  :    guard
⍺   left argument    |  ⍺⍺   left operand     |  ::   error-guard
⍵   right argument   |  ⍵⍵   right operand    |  ⍺←   default left argument
∇   self-reference   |  ∇∇   self-reference   |  s←   shy result
A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines.
      expression
      guard: expression
      guard:
The expressions and/or guards are evaluated in sequence. A guard must evaluate to a 0 or 1; its associated expression is evaluated if the value is 1. A dfn terminates after the first unguarded expression which does not end in assignment, or after the first guarded expression whose guard evaluates to 1, or if there are no more expressions. The result of a dfn is that of the last evaluated expression. If that last evaluated expression ends in assignment, the result is "shy"—not automatically displayed in the session.
Names assigned in a dfn are local by default, with lexical scope.
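The following minimal sketch (the names sign and shy are illustrative, not taken from the cited examples) shows guards selecting an expression and an assignment producing a shy result:

      sign←{⍵>0:1 ⋄ ⍵<0:¯1 ⋄ 0}    ⍝ guards are tried in order; the first whose test is 1 selects its expression
      sign¨ ¯3 0 5                 ⍝ applied to each element
¯1 0 1
      shy←{r←⍺+⍵}                  ⍝ the last expression ends in assignment, so the result is shy
      2 shy 3                      ⍝ nothing is displayed
      ⎕←2 shy 3                    ⍝ the shy result can still be used or displayed explicitly
5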
⍺ denotes the left function argument and ⍵ the right; ⍺⍺ denotes the left operand and ⍵⍵ the right. If ⍵⍵ occurs in the definition, then the dfn is a dyadic operator; if only ⍺⍺ occurs but not ⍵⍵, then it is a monadic operator; if neither ⍺⍺ nor ⍵⍵ occurs, then the dfn is a function.
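As a minimal illustration (the name twice is hypothetical, not from the source), an operator whose definition mentions ⍺⍺ but not ⍵⍵ is monadic:

      twice←{⍺⍺ ⍺⍺ ⍵}     ⍝ ⍺⍺ occurs but ⍵⍵ does not, so twice is a monadic operator
      (1∘+ twice) 5       ⍝ the derived function applies its operand 1∘+ two times
7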
The special syntax ⍺←expression is used to give a default value to the left argument if a dfn is called monadically, that is, called with no left argument. The ⍺←expression is not evaluated otherwise.
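For example (an illustrative definition, not from the source), a dfn for the ⍺-th root can default ⍺ to 2 so that the monadic case is the square root:

      root←{⍺←2 ⋄ ⍵*÷⍺}   ⍝ ⍺←2 is evaluated only when no left argument is supplied
      root 64             ⍝ monadic: square root
8
      3 root 64           ⍝ dyadic: cube root
4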
∇ denotes recursion or self-reference by the function, and ∇∇ denotes self-reference by the operator. Such denotation permits anonymous recursion.
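For instance, the factorial dfn shown earlier can recurse through ∇ without ever being named:

      {0=⍵:1 ⋄ ⍵×∇ ⍵-1} 5    ⍝ anonymous recursion via ∇
120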
Error trapping is provided through error-guards, errnums::expression. When an error is generated, the system searches dynamically through the calling functions for an error-guard that matches the error. If one is found, the execution environment is unwound to its state immediately prior to the error-guard's execution and the associated expression of the error-guard is evaluated as the result of the dfn.
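A minimal sketch (the name div is hypothetical, not from the source) of an error-guard that converts a DOMAIN ERROR (error number 11) into a result of 0:

      div←{11::0 ⋄ ⍺÷⍵}   ⍝ if error 11 is signalled later in the dfn, return 0 instead
      6 div 3
2
      6 div 0             ⍝ division by zero signals error 11, caught by the guard
0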
Additional descriptions, explanations, and tutorials on dfns are available in the cited articles. [3] [4] [5] [6] [7]
The examples here illustrate different aspects of dfns. Additional examples are found in the cited articles. [8] [9] [10]
The function {⍺+0j1×⍵} adds ⍺ to 0j1 (i, the imaginary unit) times ⍵.
      3 {⍺+0j1×⍵} 4
3J4
      ∘.{⍺+0j1×⍵}⍨ ¯2+⍳5
¯2J¯2 ¯2J¯1 ¯2 ¯2J1 ¯2J2
¯1J¯2 ¯1J¯1 ¯1 ¯1J1 ¯1J2
 0J¯2  0J¯1  0  0J1  0J2
 1J¯2  1J¯1  1  1J1  1J2
 2J¯2  2J¯1  2  2J1  2J2
The significance of this function can be seen as follows: Complex numbers can be constructed as ordered pairs of real numbers, similar to how integers can be constructed as ordered pairs of natural numbers and rational numbers as ordered pairs of integers. For complex numbers, {⍺+0j1×⍵} plays the same role as - for integers and ÷ for rational numbers. [11]: §8
Moreover, analogous to the way that monadic -⍵ ⇔ 0-⍵ (negate) and monadic ÷⍵ ⇔ 1÷⍵ (reciprocal), a monadic definition of the function is useful, effected by specifying a default value of 0 for ⍺: if j←{⍺←0 ⋄ ⍺+0j1×⍵}, then j ⍵ ⇔ 0 j ⍵ ⇔ 0+0j1×⍵.
      j←{⍺←0 ⋄ ⍺+0j1×⍵}
      3 j 4 ¯5.6 7.89
3J4 3J¯5.6 3J7.89
      j 4 ¯5.6 7.89
0J4 0J¯5.6 0J7.89
      sin←1∘○
      cos←2∘○
      Euler←{(*j⍵)=(cos⍵)j(sin⍵)}
      Euler (¯0.5+?10⍴0) j (¯0.5+?10⍴0)
1 1 1 1 1 1 1 1 1 1
The last expression illustrates Euler's formula on ten random numbers with real and imaginary parts in the interval (−0.5, 0.5).
The ternary construction of the Cantor set starts with the interval [0,1] and at each stage removes the open middle third from each remaining subinterval.
The Cantor set of order ⍵ defined as a dfn: [11]: §2.5
      Cantor←{0=⍵:,1 ⋄ ,1 0 1∘.∧∇ ⍵-1}
      Cantor 0
1
      Cantor 1
1 0 1
      Cantor 2
1 0 1 0 0 0 1 0 1
      Cantor 3
1 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1
Cantor 0 to Cantor 6 can be depicted as black bars.
The function sieve ⍵ computes a bit vector of length ⍵ so that bit i (for 0≤i<⍵) is 1 if and only if i is a prime. [10]: §46
      sieve←{
          4≥⍵:⍵⍴0 0 1 1
          r←⌊0.5*⍨n←⍵
          p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
          p←(1+(n≤×⍀p)⍳1)↑p
          b←0@1⊃{(m⍴⍵)>m⍴⍺↑1⊣m←n⌊⍺×≢⍵}⌿⊖1,p
          {r<q←b⍳1:b⊣b[⍵]←1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ ∇ ⍵,q}p
      }

      10 10⍴sieve 100
0 0 1 1 0 1 0 1 0 0
0 1 0 1 0 0 0 1 0 1
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 0 0 1
0 0 0 1 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0
      b←sieve 1e9
      ≢b
1000000000
      (10*⍳10)(+⌿↑)⍤0 1⊢b
0 4 25 168 1229 9592 78498 664579 5761455 50847534
The last sequence, the number of primes less than powers of 10, is an initial segment of OEIS: A006880. The last number, 50847534, is the number of primes less than 10⁹. It is called Bertelsen's number, memorably described by MathWorld as "an erroneous name erroneously given the erroneous value of π(10⁹) = 50847478". [12]
sieve uses two different methods to mark composites with 0s, both effected using local anonymous dfns: The first uses the sieve of Eratosthenes on an initial mask of 1 and a prefix of the primes 2 3...43, using the insert operator ⌿ (right fold). (The length of the prefix is obtained by comparison with the primorial function ×⍀p.) The second finds the smallest new prime q remaining in b (q←b⍳1), and sets to 0 bit q itself and the bits at q times the numbers at the remaining 1 bits in an initial segment of b (⍸b↑⍨⌈n÷q). This second dfn uses tail recursion.
Typically, the factorial function is defined recursively (as above), but it can be coded to exploit tail recursion by using an accumulator left argument: [13]
      fac←{⍺←1 ⋄ ⍵=0:⍺ ⋄ (⍺×⍵)∇ ⍵-1}
Similarly, the determinant of a square complex matrix using Gaussian elimination can be computed with tail recursion: [14]
      det←{                    ⍝ determinant of a square complex matrix
          ⍺←1                  ⍝ product of co-factor coefficients so far
          0=≢⍵:⍺               ⍝ result for 0-by-0
          (i j)←(⍴⍵)⊤⊃⍒|,⍵     ⍝ row and column index of the maximal element
          k←⍳≢⍵
          (⍺×⍵[i;j]ׯ1*i+j)∇ ⍵[k~i;k~j]-⍵[k~i;j]∘.×⍵[i;k~j]÷⍵[i;j]
      }
A partition of a non-negative integer n is a vector v of positive integers such that n=+⌿v, where the order in v is not significant. For example, 2 2 and 2 1 1 are partitions of 4, and 2 1 1 and 1 2 1 and 1 1 2 are considered to be the same partition.
The partition function counts the number of partitions. The function is of interest in number theory, studied by Euler, Hardy, Ramanujan, Erdős, and others. The recurrence relation

      p(n) = Σ (−1)^(k+1) × ( p(n − k×(3×k−1)÷2) + p(n − k×(3×k+1)÷2) ),  summed over k ≥ 1,

derived from Euler's pentagonal number theorem, [15] can be written as a dfn: [10]: §16
      pn ←{1≥⍵:0≤⍵ ⋄ -⌿+⌿∇¨rec ⍵}
      rec←{⍵-(÷∘2(×⍤1)¯1 1∘.+3∘×)1+⍳⌈0.5*⍨⍵×2÷3}
      pn 10
42
      pn¨⍳13    ⍝ OEIS A000041
1 1 2 3 5 7 11 15 22 30 42 56 77
The basis step 1≥⍵:0≤⍵ states that for 1≥⍵, the result of the function is 0≤⍵: 1 if ⍵ is 0 or 1, and 0 otherwise. The recursive step is highly multiply recursive. For example, pn 200 would result in the function being applied to each element of rec 200, which are:
      rec 200
199 195 188 178 165 149 130 108 83 55 24 ¯10
198 193 185 174 160 143 123 100 74 45 13 ¯22
and pn 200 requires longer than the age of the universe to compute, owing to the enormous number of calls the function makes to itself. [10]: §16 The compute time can be reduced by memoization, here implemented as the direct operator (higher-order function) M:
      M←{
          f←⍺⍺
          i←2+'⋄'⍳⍨t←2↓,⎕cr'f'
          ⍎'{T←(1+⍵)⍴¯1 ⋄ ',(i↑t),'¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂',(i↓t),'⍵}⍵'
      }

      pn M 200
3.973E12
      0⍕pn M 200    ⍝ format to 0 decimal places
3972999029388
This value of pn M 200 agrees with that computed by Hardy and Ramanujan in 1918. [16]
The memo operator M defines a variant of its operand function ⍺⍺ to use a cache T and then evaluates it. With the operand pn the variant is:
      {T←(1+⍵)⍴¯1 ⋄ {1≥⍵:0≤⍵ ⋄ ¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂-⌿+⌿∇¨rec ⍵}⍵}
Quicksort on an array ⍵ works by choosing a "pivot" at random among its major cells, then catenating the sorted major cells which strictly precede the pivot, the major cells equal to the pivot, and the sorted major cells which strictly follow the pivot, as determined by a comparison function ⍺⍺. Defined as a direct operator (dop) Q:
      Q←{1≥≢⍵:⍵ ⋄ (∇ ⍵⌿⍨0>s)⍪(⍵⌿⍨0=s)⍪∇ ⍵⌿⍨0<s←⍵ ⍺⍺ ⍵⌷⍨?≢⍵}

      ⍝ precedes     ⍝ follows      ⍝ equals
      2 (×-) 8       8 (×-) 2       8 (×-) 8
¯1             1              0

      x←2 19 3 8 3 6 9 4 19 7 0 10 15 14

      (×-) Q x
0 2 3 3 4 6 7 8 9 10 14 15 19 19
Q3 is a variant that catenates the three parts enclosed by the function ⊂ instead of the parts per se. The three parts generated at each recursive step are apparent in the structure of the final result. Applying the function derived from Q3 to the same argument multiple times gives different results because the pivots are chosen at random. In-order traversal of the results does yield the same sorted array.
      Q3←{1≥≢⍵:⍵ ⋄ (⊂∇ ⍵⌿⍨0>s)⍪(⊂⍵⌿⍨0=s)⍪⊂∇ ⍵⌿⍨0<s←⍵ ⍺⍺ ⍵⌷⍨?≢⍵}
      (×-) Q3 x
(a nested result whose boxed display reflects one random choice of pivots)
      (×-) Q3 x
(a differently nested result arising from another random choice of pivots)
The above formulation is not new; see, for example, Figure 3.7 of the classic The Design and Analysis of Computer Algorithms. [17] However, unlike the pidgin ALGOL program in Figure 3.7, Q is executable, and the partial order used in the sorting is an operand, the (×-) in the examples above. [9]
Dfns, especially anonymous dfns, work well with operators and trains. The following snippet solves a "Programming Pearls" puzzle: [18] given a dictionary of English words, here represented as the character matrix a, find all sets of anagrams.
      a         {⍵[⍋⍵]}⍤1 ⊢a     ({⍵[⍋⍵]}⍤1 {⊂⍵}⌸ ⊢) a
pats            apst             ┌────┬────┬────┐
spat            apst             │pats│teas│star│
teas            aest             │spat│sate│    │
sate            aest             │taps│etas│    │
taps            apst             │past│seat│    │
etas            aest             │    │eats│    │
past            apst             │    │tase│    │
seat            aest             │    │east│    │
eats            aest             │    │seta│    │
tase            aest             └────┴────┴────┘
star            arst
east            aest
seta            aest
The algorithm works by sorting the rows individually ({⍵[⍋⍵]}⍤1 ⊢a), and these sorted rows are used as keys ("signature" in the Programming Pearls description) to the key operator ⌸ to group the rows of the matrix. [9]: §3.3 The expression on the right is a train, a syntactic form employed by APL to achieve tacit programming. Here, it is an isolated sequence of three functions such that (f g h)⍵ ⇔ (f⍵) g (h⍵), whence the expression on the right is equivalent to ({⍵[⍋⍵]}⍤1 ⊢a) {⊂⍵}⌸ a.
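As a small illustration of the train form (the name avg is illustrative, not from the source), the three-function train +⌿÷≢ computes an average, since (f g h)⍵ ⇔ (f⍵) g (h⍵):

      avg←+⌿÷≢         ⍝ (+⌿⍵)÷(≢⍵): sum divided by count
      avg 1 2 3 4
2.5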
When an inner (nested) dfn refers to a name, it is sought by looking outward through enclosing dfns rather than down the call stack. This regime is said to employ lexical scope instead of APL's usual dynamic scope. The distinction becomes apparent only if a call is made to a function defined at an outer level. For the more usual inward calls, the two regimes are indistinguishable. [19] : p.137
For example, in the following function which, the variable ty is defined both in which itself and in the inner function f1. When f1 calls outward to f2 and f2 refers to ty, it finds the outer one (with value 'lexical') rather than the one defined in f1 (with value 'dynamic'):
      which←{
          ty←'lexical'
          f1←{ty←'dynamic' ⋄ f2 ⍵}
          f2←{ty,⍵}
          f1 ⍵
      }
      which ' scope'
lexical scope
The following function illustrates use of error guards: [19] : p.139
      plus←{
          tx←'catch all' ⋄ 0::tx
          tx←'domain'    ⋄ 11::tx
          tx←'length'    ⋄ 5::tx
          ⍺+⍵
      }
      2 plus 3                  ⍝ no errors
5
      2 3 4 5 plus 'three'      ⍝ argument lengths don't match
length
      2 3 4 5 plus 'four'       ⍝ can't add characters
domain
      2 3 plus 3 4⍴5            ⍝ can't add vector to matrix
catch all
In APL, error number 5 is "length error"; error number 11 is "domain error"; and error number 0 is a "catch all" for error numbers 1 to 999.
The example shows the unwinding of the local environment before an error-guard's expression is evaluated. The local name tx is set to describe the purview of its following error-guard. When an error occurs, the environment is unwound to expose tx's statically correct value.
Since direct functions are dfns, APL functions defined in the traditional manner are referred to as tradfns, pronounced "trad funs". Here, dfns and tradfns are compared by consideration of the function sieve: first as a dfn (as defined above), then as a tradfn using control structures, and finally as a tradfn using gotos (→) and line labels.
⍝ dfn (as defined above)
sieve←{
    4≥⍵:⍵⍴0 0 1 1
    r←⌊0.5*⍨n←⍵
    p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
    p←(1+(n≤×⍀p)⍳1)↑p
    b←0@1⊃{(m⍴⍵)>m⍴⍺↑1⊣m←n⌊⍺×≢⍵}⌿⊖1,p
    {r<q←b⍳1:b⊣b[⍵]←1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ ∇ ⍵,q}p
}

⍝ tradfn using control structures
∇ b←sieve1 n;i;m;p;q;r
  :If 4≥n ⋄ b←n⍴0 0 1 1 ⋄ :Return ⋄ :EndIf
  r←⌊0.5*⍨n
  p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
  p←(1+(n≤×⍀p)⍳1)↑p
  b←1
  :For q :In p ⋄ b←(m⍴b)>m⍴q↑1⊣m←n⌊q×≢b ⋄ :EndFor
  b[1]←0
  :While r≥q←b⍳1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ p⍪←q ⋄ :EndWhile
  b[p]←1
∇

⍝ tradfn using gotos and line labels
∇ b←sieve2 n;i;m;p;q;r
  →L10⍴⍨4<n ⋄ b←n⍴0 0 1 1 ⋄ →0
 L10:r←⌊0.5*⍨n
  p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
  p←(1+(n≤×\p)⍳1)↑p
  i←0 ⋄ b←1
 L20:b←(m⍴b)>m⍴p[i]↑1⊣m←n⌊p[i]×≢b
  →L20⍴⍨(≢p)>i←1+i
  b[1]←0
 L30:→L40⍴⍨r<q←b⍳1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ p⍪←q ⋄ →L30
 L40:b[p]←1
∇
Differences between dfns and tradfns include the following:
- A dfn is named by assignment (←); a tradfn is named by embedding the name in the representation of the function and applying ⎕fx (a system function) to that representation.
- The arguments of a dfn are named ⍺ and ⍵ and the operands of a dop are named ⍺⍺ and ⍵⍵; the arguments and operands of a tradfn can have any name, specified on its leading line.
- Recursion in a dfn is effected by invoking ∇ or ∇∇ or its name; recursion in a tradfn is effected by invoking its name.
- Flow control in a dfn is effected by guards and function calls; that in a tradfn is by control structures and → (goto) and line labels.
- A dfn terminates on the first unguarded expression which does not end in assignment, on the first guarded expression whose guard evaluates to 1, or after the last expression; a tradfn terminates on → (goto) line 0 or a non-existing line, or on evaluating a :Return control structure, or after the last line.

Kenneth E. Iverson, the inventor of APL, was dissatisfied with the way user functions (tradfns) were defined. In 1974, he devised "formal function definition" or "direct definition" for use in exposition. [20] A direct definition has two or four parts, separated by colons:
      name: expression
      name: expression0 : proposition : expression1
Within a direct definition, ⍺ denotes the left argument and ⍵ the right argument. In the first instance, the result of expression is the result of the function; in the second instance, the result of the function is that of expression0 if proposition evaluates to 0, or expression1 if it evaluates to 1. Assignments within a direct definition are dynamically local. Examples of using direct definition are found in the 1979 Turing Award Lecture [21] and in books and application papers. [22] [23] [24] [25] [9]
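Following these rules, the factorial function might be written in direct definition as (an illustrative reconstruction, not an example from the source):

      fac: ⍵×fac ⍵-1 : ⍵=0 : 1

Here the proposition ⍵=0 selects the recursive expression when it evaluates to 0 and the base expression 1 when it evaluates to 1.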
Direct definition was too limited for use in larger systems. The ideas were further developed by multiple authors in multiple works, [26]: §8 [27] [28]: §4.17 [29] [30] [31] [32] but the results were unwieldy. Of these, the "alternative APL function definition" of Bunda in 1987 [31] came closest to current facilities, but was flawed by conflicts with existing symbols and by error handling that would have caused practical difficulties, and it was never implemented. The main distillates from the different proposals were that (a) the function being defined is anonymous, with subsequent naming (if required) effected by assignment; and (b) the function is denoted by a symbol, thereby enabling anonymous recursion. [9]
In 1996, John Scholes of Dyalog Limited invented direct functions (dfns). [1] [6] [7] The ideas originated in 1989 when he read a special issue of The Computer Journal on functional programming. [33] He then proceeded to study functional programming and became strongly motivated ("sick with desire", like Yeats) to bring these ideas to APL. [6] [7] He initially operated in stealth because he was concerned the changes might be judged too radical and an unnecessary complication of the language; other observers say that he operated in stealth because Dyalog colleagues were not so enamored and thought he was wasting his time and causing trouble for people. Dfns were first presented in the Dyalog Vendor Forum at the APL '96 Conference and released in Dyalog APL in early 1997. [1] Acceptance and recognition were slow in coming. As late as 2008, in Dyalog at 25, [34] a publication celebrating the 25th anniversary of Dyalog Limited, dfns were barely mentioned (mentioned twice as "dynamic functions" and without elaboration). As of 2019, dfns are implemented in Dyalog APL, [19] NARS2000, [35] and ngn/apl. [36] They also play a key role in efforts to exploit the computing abilities of a graphics processing unit (GPU). [37] [9]