Haskell features


This article describes the features of the programming language Haskell.


Examples

Factorial

A simple example that is often used to demonstrate the syntax of functional languages is the factorial function for non-negative integers, shown in Haskell:

factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

Or in one line:

factorial n = if n > 1 then n * factorial (n - 1) else 1

This describes the factorial as a recursive function, with one terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much Haskell code resembles standard mathematical notation in facility and syntax.

The first line of the factorial function describes the type of this function; while it is optional, it is considered to be good style [1] to include it. It can be read as the function factorial (factorial) has type (::) from integer to integer (Integer -> Integer). That is, it takes an integer as an argument, and returns another integer. The type of a definition is inferred automatically if no type annotation is given.

The second line relies on pattern matching, an important feature of Haskell. Note that parameters of a function are not in parentheses but separated by spaces. When the function's argument is 0 (zero) it will return the integer 1 (one). For all other cases the third line is tried. This is the recursion, and executes the function again until the base case is reached.

Using the product function from the Prelude (Haskell's standard collection of small, generally useful functions, roughly analogous to C's standard library) and the Haskell syntax for arithmetic sequences, the factorial function can be expressed as follows:

factorial n = product [1..n]

Here [1..n] denotes the arithmetic sequence 1, 2, …, n in list form. Using the Prelude function enumFromTo, the expression [1..n] can be written as enumFromTo 1 n, allowing the factorial function to be expressed as

factorial n = product (enumFromTo 1 n)

which, using the function composition operator (expressed as a dot in Haskell) to compose the product function with the curried enumeration function, can be rewritten in point-free style: [2]

factorial = product . enumFromTo 1

In the Hugs interpreter, a function often has to be defined and used on the same line, joined by a where or a let ... in. For example, to test the above examples and see the output 120:

let { factorial n | n > 0 = n * factorial (n - 1); factorial _ = 1 } in factorial 5

or

factorial 5 where factorial = product . enumFromTo 1

The GHCi interpreter does not have this restriction: function definitions can be entered on one line (using the let syntax without the in part) and referenced later.
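
For illustration, such a GHCi session might look like this (the prompt is shown here as ghci>; the actual prompt text may differ):

ghci> let factorial = product . enumFromTo 1
ghci> factorial 5
120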

More complex examples

Calculator

In the Haskell source immediately below, :: can be read as "has type"; a -> b can be read as "is a function from a to b". (Thus the Haskell calc :: String -> [Float] can be read as "calc has type of a function from Strings to lists of Floats".) In the second line calc = ... the equals sign can be read as "can be"; thus multiple lines with calc = ... can be read as multiple possible values for calc, depending on the circumstance detailed in each line.

A simple Reverse Polish notation calculator expressed with the higher-order function foldl whose argument f is defined in a where clause using pattern matching and the type class Read:

calc :: String -> [Float]
calc = foldl f [] . words
  where
    f (x:y:zs) "+"    = (y + x) : zs
    f (x:y:zs) "-"    = (y - x) : zs
    f (x:y:zs) "*"    = (y * x) : zs
    f (x:y:zs) "/"    = (y / x) : zs
    f (x:y:zs) "FLIP" = y : x : zs
    f zs       w      = read w : zs

The empty list is the initial state, and f interprets one word at a time: either as an operator name, in which case it takes two numbers from the head of the list and pushes the result back on, or as a floating-point number, which it parses and prepends to the list.
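
For example, assuming the definition above (the results are Floats, so they print with decimal points):

calc "2 3 + 4 *"   -- returns [20.0]: pushes 2 and 3, replaces them by their sum, pushes 4, multiplies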

Fibonacci sequence

The following definition produces the list of Fibonacci numbers in linear time:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

The infinite list is produced by corecursion: the later values of the list are computed on demand, starting from the initial two items 0 and 1. This kind of definition relies on lazy evaluation, an important feature of Haskell programming. For an example of how the evaluation evolves, the following illustrates the values of fibs and tail fibs after the computation of six items and shows how zipWith (+) has produced four items and proceeds to produce the next item:

fibs         = 0 : 1 : 1 : 2 : 3 : 5 : ...
                   +   +   +   +   +   +
tail fibs    =     1 : 1 : 2 : 3 : 5 : ...
                   =   =   =   =   =   =
zipWith ...  =     1 : 2 : 3 : 5 : 8 : ...
fibs = 0 : 1 : 1 : 2 : 3 : 5 : 8 : ...

The same function, written using Glasgow Haskell Compiler's parallel list comprehension syntax (GHC extensions must be enabled using a special command-line flag, here -XParallelListComp, or by starting the source file with {-# LANGUAGE ParallelListComp #-}):

fibs = 0 : 1 : [a + b | a <- fibs | b <- tail fibs]

or with regular list comprehensions:

fibs = 0 : 1 : [a + b | (a, b) <- zip fibs (tail fibs)]

or directly self-referencing:

fibs = 0 : 1 : next fibs where next (a : t@(b:_)) = (a + b) : next t

With stateful generating function:

fibs = next (0, 1) where next (a, b) = a : next (b, a + b)

or with unfoldr:

fibs = unfoldr (\(a, b) -> Just (a, (b, a + b))) (0, 1)

or scanl:

fibs = 0 : scanl (+) 1 fibs

Using data recursion with Haskell's predefined fixpoint combinator:

fibs = fix (\xs -> 0 : 1 : zipWith (+) xs (tail xs))   -- zipWith version
     = fix ((0:) . (1:) . (zipWith (+) <*> tail))      -- same as above, pointfree
     = fix ((0:) . scanl (+) 1)                        -- scanl version

Factorial

The factorial we saw previously can be written as a sequence of functions:

factorial n = foldr ((.) . (*)) id [1..n] $ 1
-- factorial 5 == ((1*) .) ( ((2*) .) ( ((3*) .) ( ((4*) .) ( ((5*) .) id )))) 1
--             == (1*) . (2*) . (3*) . (4*) . (5*) . id $ 1
--             ==  1*  (  2*  (  3*  (  4*  (  5*  ( id   1 )))))

factorial n = foldr ((.) . (*)) (const 1) [1..n] $ ()
-- factorial 5 == ((1*) .) ( ((2*) .) ( ((3*) .) ( ((4*) .) ( ((5*) .) (const 1) )))) ()
--             == (1*) . (2*) . (3*) . (4*) . (5*) . const 1 $ ()
--             ==  1*  (  2*  (  3*  (  4*  (  5*  ( const 1   () )))))

factorial n = foldr (($) . (*)) 1 [1..n]
            = foldr ($) 1 $ map (*) [1..n]
-- factorial 5 == ((1*) $) ( ((2*) $) ( ((3*) $) ( ((4*) $) ( ((5*) $) 1 ))))
--             == (1*) $ (2*) $ (3*) $ (4*) $ (5*) $ 1
--             ==  1*  (  2*  (  3*  (  4*  (  5*    1 ))))

More examples

Hamming numbers

A remarkably concise function that returns the list of Hamming numbers in order:

hamming = 1 : map (2*) hamming `union` map (3*) hamming `union` map (5*) hamming

Like the various fibs solutions displayed above, this uses corecursion to produce a list of numbers on demand, starting from the base case of 1 and building new items based on the preceding part of the list.

Here the function union is used as an operator by enclosing it in back-quotes. Its case clauses define how it merges two ascending lists into one ascending list without duplicate items, representing sets as ordered lists. Its companion function minus implements set difference:

union (x:xs) (y:ys) = case compare x y of
    LT -> x : union xs (y:ys)
    EQ -> x : union xs ys
    GT -> y : union (x:xs) ys
union xs [] = xs
union [] ys = ys

minus (x:xs) (y:ys) = case compare x y of
    LT -> x : minus xs (y:ys)
    EQ ->     minus xs ys
    GT ->     minus (x:xs) ys
minus xs _ = xs
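
With these definitions in scope, the corecursive list can be sampled lazily; for example:

take 10 hamming   -- returns [1,2,3,4,5,6,8,9,10,12]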

It is possible to generate only the unique multiples, for more efficient operation. Since there are no duplicates, there's no need to remove them:

smooth235 = 1 : foldr (\p s -> fix $ mergeBy (<) s . map (p*) . (1:)) [] [2,3,5]
  where
    fix f = x where x = f x   -- fixpoint combinator, with sharing

This uses the more efficient function mergeBy, which does not concern itself with duplicates (it is also used in the next function, mergesort):

mergeBy less xs ys = merge xs ys
  where
    merge xs [] = xs
    merge [] ys = ys
    merge (x:xs) (y:ys) | less y x  = y : merge (x:xs) ys
                        | otherwise = x : merge xs (y:ys)

Each vertical bar ( | ) starts a guard clause, with a guard expression before the = sign and the corresponding definition after it, which is evaluated if the guard is true.
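
As a minimal illustration (not part of the article's examples), a definition using guards reads:

classify x
  | x < 0     = "negative"
  | x == 0    = "zero"
  | otherwise = "positive"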

Mergesort

Here is a bottom-up merge sort, defined using the higher-order function until:

mergesortBy less [] = []
mergesortBy less xs = head $ until (null . tail) (pairwise $ mergeBy less) [[x] | x <- xs]

pairwise f (a:b:t) = f a b : pairwise f t
pairwise f t       = t

Prime numbers

The mathematical definition of primes can be translated pretty much word for word into Haskell:

-- "Integers above 1 that cannot be divided by a smaller integer above 1"
-- primes = { n ∈ [2..] | ~ ∃ d ∈ [2..n-1] ⇒ rem n d = 0 }
--        = { n ∈ [2..] |   ∀ d ∈ [2..n-1] ⇒ rem n d ≠ 0 }
primes = [n | n <- [2..], all (\d -> rem n d /= 0) [2..(n-1)]]

This finds primes by trial division. Note that it is not optimized for efficiency and has very poor performance. Slightly faster (but still very slow) [3] is this code by David Turner:

primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, rem x p /= 0]

Much faster is the optimal trial division algorithm

primes = 2 : [n | n <- [3..], all ((> 0) . rem n) $ takeWhile ((<= n) . (^2)) primes]

or an unbounded sieve of Eratosthenes with postponed sieving in stages (the code below uses GHC's ViewPatterns extension), [4]

primes = 2 : sieve primes [3..]
  where
    sieve (p:ps) (span (< p*p) -> (h, t)) = h ++ sieve ps (minus t [p*p, p*p+p..])

or the combined sieve implementation by Richard Bird, [5]

-- "Integers above 1 without any composite numbers which
--  are found by enumeration of each prime's multiples"
primes = 2 : minus [3..] (foldr (\(m:ms) r -> m : union ms r) [] [[p*p, p*p+p..] | p <- primes])

or an even faster tree-like folding variant [6] with nearly optimal (for a list-based code) time complexity and very low space complexity achieved through telescoping multistage recursive production of primes:

primes = 2 : _Y ((3:) . minus [5,7..] . _U . map (\p -> [p*p, p*p+2*p..]))
  where
    -- non-sharing Y combinator:
    _Y g = g (_Y g)              -- (g (g (g (g (...)))))
    -- big union   ~= nub . sort . concat
    _U ((x:xs):t) = x : (union xs . _U . pairwise union) t

Working on arrays by segments between consecutive squares of primes, it can be written as

import Data.Array
import Data.List (tails, inits)

primes = 2 : [n | (r:q:_, px) <- zip (tails (2 : [p*p | p <- primes])) (inits primes),
                  (n, True)   <- assocs (accumArray (\_ _ -> False) True (r+1, q-1)
                                   [(m, ()) | p <- px, s <- [div (r+p) p * p], m <- [s, s+p..q-1]])]

The shortest possible code is probably nubBy (((>1) .) . gcd) [2..].  It is quite slow.

Syntax

Layout

Haskell allows indentation to be used to indicate the beginning of a new declaration. For example, in a where clause:

product xs = prod xs 1
  where
    prod []     a = a
    prod (x:xs) a = prod xs (a * x)

The two equations for the nested function prod are aligned vertically, which allows the semi-colon separator to be omitted. In Haskell, indentation can be used in several syntactic constructs, including do, let, case, class, and instance.
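
For instance, layout also delimits the alternatives of a case expression; a minimal sketch (not from the article):

describe n = case n of
    0 -> "zero"
    1 -> "one"
    _ -> "many"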

The use of indentation to indicate program structure originates in Peter J. Landin's ISWIM language, where it was called the off-side rule. This was later adopted by Miranda, and Haskell adopted a similar (but rather more complex) version of Miranda's off-side rule, which is called "layout". Other languages to adopt whitespace character-sensitive syntax include Python and F#.

The use of layout in Haskell is optional. For example, the function product above can also be written:

product xs = prod xs 1 where { prod [] a = a; prod (x:xs) a = prod xs (a*x) }

The explicit open brace after the where keyword indicates that separate declarations will use explicit semi-colons, and the declaration-list will be terminated by an explicit closing brace. One reason for wanting support for explicit delimiters is that it makes automatic generation of Haskell source code easier.

Haskell's layout rule has been criticised for its complexity. In particular, the definition states that if the parser encounters a parse error during processing of a layout section, then it should try inserting a close brace (the "parse error" rule). Implementing this rule in a traditional parsing and lexical analysis combination requires two-way cooperation between the parser and lexical analyser, whereas in most languages, these two phases can be considered independently.

Function calls

Applying a function f to a value x is expressed as simply f x.

Haskell distinguishes function calls from infix operators syntactically, but not semantically. Function names which are composed of punctuation characters can be used as operators, as can other function names if surrounded with backticks; and operators can be used in prefix notation if surrounded with parentheses.

This example shows the ways that functions can be called:

add a b = a + b

ten1 = 5 + 5
ten2 = (+) 5 5
ten3 = add 5 5
ten4 = 5 `add` 5

Functions which are defined as taking several parameters can always be partially applied. Binary operators can be partially applied using section notation:

ten5 = (+ 5) 5
ten6 = (5 +) 5

addfive = (5 +)
ten7 = addfive 5

List comprehensions

See List comprehension#Overview for the Haskell example.

Pattern matching

Pattern matching is used to match on the different constructors of algebraic data types. Here are some functions, each using pattern matching on each of the types below:

-- This type signature says that empty takes a list containing any type, and returns a Bool
empty :: [a] -> Bool
empty (x:xs) = False
empty []     = True

-- Will return a value from a Maybe a, given a default value in case a Nothing is encountered
fromMaybe :: a -> Maybe a -> a
fromMaybe x (Just y) = y
fromMaybe x Nothing  = x

isRight :: Either a b -> Bool
isRight (Right _) = True
isRight (Left _)  = False

getName :: Person -> String
getName (Person name _ _) = name

getSex :: Person -> Sex
getSex (Person _ sex _) = sex

getAge :: Person -> Int
getAge (Person _ _ age) = age

Using the above functions, along with the map function, we can apply them to each element of a list, to see their results:

map empty [[1,2,3], [], [2], [1..]]
-- returns [False,True,False,False]

map (fromMaybe 0) [Just 2, Nothing, Just 109238, Nothing]
-- returns [2,0,109238,0]

map isRight [Left "hello", Right 6, Right 23, Left "world"]
-- returns [False, True, True, False]

map getName [Person "Sarah" Female 20, Person "Alex" Male 20, tom]
-- returns ["Sarah", "Alex", "Tom"], using the definition for tom above

Tuples

Tuples in Haskell hold a fixed number of elements. They are used to group pieces of data of differing types:

account :: (String, Integer, Double)   -- The type of a three-tuple, representing
                                       --   a name, balance, and interest rate
account = ("John Smith", 102894, 5.25)

Tuples are commonly used in the zip* functions to place adjacent elements in separate lists together in tuples (zip4 to zip7 are provided in the Data.List module):

-- The definition of the zip function. Other zip* functions are defined similarly
zip :: [x] -> [y] -> [(x, y)]
zip (x:xs) (y:ys) = (x, y) : zip xs ys
zip _      _      = []

zip [1..5] "hello"
-- returns [(1,'h'),(2,'e'),(3,'l'),(4,'l'),(5,'o')]
-- and has type [(Integer, Char)]

zip3 [1..5] "hello" [False, True, False, False, True]
-- returns [(1,'h',False),(2,'e',True),(3,'l',False),(4,'l',False),(5,'o',True)]
-- and has type [(Integer, Char, Bool)]

In the GHC compiler, tuples are defined with sizes from 2 elements up to 62 elements.

Namespaces

In the § More complex examples section above, calc is used in two senses, showing that Haskell keeps a namespace for types separate from the namespace for values (a small sketch follows this list):

  1. a Haskell type for calc, given by its type signature calc :: String -> [Float]; the domain and range can be denoted explicitly in the type.
  2. a Haskell value, formula, or expression for calc, given by its defining equation.
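
A minimal sketch of the two namespaces (the names Box and unbox are illustrative, not from the article): the same identifier may name both a type and a value constructor, because types and values live in separate namespaces.

data Box = Box Integer   -- "Box" names both a type (left) and a value constructor (right)

unbox :: Box -> Integer  -- the type of unbox
unbox (Box n) = n        -- the value (defining equation) of unbox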

Typeclasses and polymorphism

Algebraic data types

Algebraic data types are used extensively in Haskell. Some examples are the built-in list, Maybe, and Either types:

-- A list of a's ([a]) is either an a consed (:) onto another list of a's, or an empty list ([])
data [a] = a : [a] | []

-- Something of type Maybe a is either Just something, or Nothing
data Maybe a = Just a | Nothing

-- Something of type Either atype btype is either a Left atype, or a Right btype
data Either a b = Left a | Right b

Users of the language can also define their own algebraic data types. An example of a data type used to represent a person's name, sex, and age might look like:

data Sex = Male | Female
data Person = Person String Sex Int   -- Notice that Person is both a constructor and a type

-- An example of creating something of type Person
tom :: Person
tom = Person "Tom" Male 27

Type system

Monads and input/output

ST monad

The ST monad allows writing imperative programming algorithms in Haskell, using mutable variables (STRefs) and mutable arrays (STArrays and STUArrays). The advantage of the ST monad is that it allows writing code that has internal side effects, such as destructively updating mutable variables and arrays, while containing these effects inside the monad. The result of this is that functions written using the ST monad appear pure to the rest of the program. This allows using imperative code where it may be impractical to write functional code, while still keeping all the safety that pure code provides.

Here is an example program (taken from the Haskell wiki page on the ST monad) that takes a list of numbers, and sums them, using a mutable variable:

import Control.Monad.ST
import Data.STRef
import Control.Monad

sumST :: Num a => [a] -> a
sumST xs = runST $ do           -- runST takes stateful ST code and makes it pure.
    summed <- newSTRef 0        -- Create an STRef (a mutable variable).
    forM_ xs $ \x -> do         -- For each element of the argument list xs ..
        modifySTRef summed (+x) -- .. add it to what we have in summed.
    readSTRef summed            -- Read the value of summed, which will be returned by runST above.
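
For example, assuming the definition above:

sumST [1..100]   -- returns 5050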

STM monad

The STM monad is an implementation of Software Transactional Memory in Haskell. It is implemented in the GHC compiler, and allows for mutable variables to be modified in transactions.
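
A minimal sketch using the stm library's Control.Concurrent.STM module (the transfer function and the account balances are illustrative, not from the article): two mutable TVar balances are updated inside a single atomic transaction.

import Control.Concurrent.STM

-- Move an amount between two mutable balances; both updates commit as one atomic transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
    modifyTVar' from (subtract amount)
    modifyTVar' to   (+ amount)

main :: IO ()
main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    atomically (transfer a b 30)   -- the whole transfer commits as one transaction
    readTVarIO a >>= print         -- prints 70
    readTVarIO b >>= print         -- prints 30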


As Haskell is a pure functional language, functions cannot have side effects. Being non-strict, it also does not have a well-defined evaluation order. This is a challenge for real programs, which among other things need to interact with an environment. Haskell solves this with monadic types that leverage the type system to ensure the proper sequencing of imperative constructs. The typical example is input/output (I/O), but monads are useful for many other purposes, including mutable state, concurrency and transactional memory, exception handling, and error propagation.

Haskell provides a special syntax for monadic expressions, so that side-effecting programs can be written in a style similar to current imperative programming languages; no knowledge of the mathematics behind monadic I/O is required for this. The following program reads a name from the command line and outputs a greeting message:

main = do
    putStrLn "What's your name?"
    name <- getLine
    putStr ("Hello, " ++ name ++ "!\n")

The do-notation eases working with monads. This do-expression is equivalent to, but (arguably) easier to write and understand than, the de-sugared version employing the monadic operators directly:

main = putStrLn "What's your name?" >>
       getLine >>= \name ->
       putStr ("Hello, " ++ name ++ "!\n")

See also wikibooks:Transwiki:List of hello world programs#Haskell for another example that prints text.

Concurrency

The Haskell language definition includes neither concurrency nor parallelism, although GHC supports both.

Concurrent Haskell is an extension to Haskell that supports threads and synchronization. [7] GHC's implementation of Concurrent Haskell is based on multiplexing lightweight Haskell threads onto a few heavyweight operating system (OS) threads, [8] so that Concurrent Haskell programs run in parallel via symmetric multiprocessing. The runtime can support millions of simultaneous threads. [9]
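
As a minimal illustrative sketch (not from the cited papers), a Concurrent Haskell program can fork a lightweight thread with forkIO and synchronize through an MVar:

import Control.Concurrent

main :: IO ()
main = do
    done <- newEmptyMVar
    _ <- forkIO $ do                 -- run a lightweight Haskell thread
        putStrLn "hello from a lightweight thread"
        putMVar done ()              -- signal the main thread
    takeMVar done                    -- block until the signal arrives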

The GHC implementation employs a dynamic pool of OS threads, allowing a Haskell thread to make a blocking system call without blocking other running Haskell threads. [10] Hence the lightweight Haskell threads have the characteristics of heavyweight OS threads, and a programmer can be unaware of the implementation details.

Concurrent Haskell has been extended with support for software transactional memory (STM), a concurrency abstraction in which compound operations on shared data are performed atomically, as transactions. [11] GHC's STM implementation is the only STM implementation to date to provide a static compile-time guarantee preventing non-transactional operations from being performed within a transaction. The Haskell STM library also provides two operations not found in other STMs: retry and orElse, which together allow blocking operations to be defined in a modular and composable fashion.
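
For illustration, a blocking operation built from retry might look like the following sketch (takeJust is a hypothetical helper, not a library function):

import Control.Concurrent.STM

-- Block until the TVar holds a Just value, then take it, leaving Nothing behind.
takeJust :: TVar (Maybe a) -> STM a
takeJust v = do
    m <- readTVar v
    case m of
        Nothing -> retry                           -- no value yet: abandon and re-run when v changes
        Just x  -> writeTVar v Nothing >> return x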


References

  1. HaskellWiki: Type signatures as good style
  2. HaskellWiki: Pointfree
  3. "Prime numbers - HaskellWiki". www.haskell.org.
  4. "Prime numbers - HaskellWiki". www.haskell.org.
  5. O'Neill, Melissa E., "The Genuine Sieve of Eratosthenes", Journal of Functional Programming, published online by Cambridge University Press, 9 October 2008, doi:10.1017/S0956796808007004, pp. 10, 11.
  6. "Prime numbers - HaskellWiki". www.haskell.org.
  7. Simon Peyton Jones, Andrew Gordon, and Sigbjorn Finne. Concurrent Haskell. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (PoPL). 1996. (Some sections are out of date with respect to the current implementation.)
  8. Runtime Support for Multicore Haskell Archived 2010-07-05 at the Wayback Machine (Simon Marlow, Simon Peyton Jones, Satnam Singh) ICFP '09: Proceedings of the 14th ACM SIGPLAN international conference on Functional programming, Edinburgh, Scotland, August 2009
  9. "DEFUN 2009: Multicore Programming in Haskell Now!". 5 September 2009.
  10. Extending the Haskell Foreign Function Interface with Concurrency Archived 2010-07-03 at the Wayback Machine (Simon Marlow, Simon Peyton Jones, Wolfgang Thaller) Proceedings of the ACM SIGPLAN workshop on Haskell, pages 57--68, Snowbird, Utah, USA, September 2004
  11. Harris, Tim; Marlow, Simon; Peyton Jones, Simon; Herlihy, Maurice (2005). "Composable memory transactions". Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming. CiteSeerX 10.1.1.67.3686.