Applicative functor

In functional programming, an applicative functor, or an applicative for short, is an intermediate structure between functors and monads. Applicative functors allow functorial computations to be sequenced (unlike plain functors), but do not allow the results of prior computations to be used in defining subsequent ones (unlike monads). In category theory, applicative functors are the programming equivalent of lax monoidal functors with tensorial strength; they are also sometimes described as lax closed functors.

Applicative functors were introduced in 2008 by Conor McBride and Ross Paterson in their paper Applicative programming with effects. [1]

Applicative functors first appeared as a library feature in Haskell, but have since spread to other languages as well, including Idris, Agda, OCaml, Scala and F#. Glasgow Haskell, Idris, and F# offer language features designed to ease programming with applicative functors. In Haskell, applicative functors are implemented in the Applicative type class.

While in languages like Haskell monads are applicative functors, this does not hold in the general setting of category theory: examples of monads that are not strong can be found on MathOverflow.

Definition

In Haskell, an applicative is a parameterized type that we think of as a container for data of the parameter type, together with two methods, pure and <*>. The pure method for an applicative of parameterized type f has type

pure :: a -> f a

and can be thought of as bringing values into the applicative. The <*> method for an applicative of type f has type

(<*>) :: f (a -> b) -> f a -> f b

and can be thought of as the equivalent of function application inside the applicative. [2]
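In the Haskell standard library, these two methods (together with a Functor superclass constraint) form the Applicative type class. A simplified version of its declaration, omitting the class's additional methods and default definitions, is:

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b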

Alternatively, instead of providing <*>, one may provide a function called liftA2. These two functions may be defined in terms of each other; therefore only one is needed for a minimally complete definition. [3]
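As a sketch of this interdefinability, each function can be written in terms of the other; the names apViaLift and liftA2ViaAp below are illustrative, chosen to avoid clashing with the library versions:

import Control.Applicative (liftA2)

-- (<*>) recovered from liftA2: using id as the combining function means the
-- function stored in the first argument is applied directly to the second.
apViaLift :: Applicative f => f (a -> b) -> f a -> f b
apViaLift = liftA2 id

-- liftA2 recovered from (<*>): map the binary function over the first
-- argument, then apply the resulting wrapped function to the second.
liftA2ViaAp :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2ViaAp g x y = g <$> x <*> y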

Applicatives are also required to satisfy four equational laws. [3]
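As stated in the Control.Applicative documentation, the four laws are (here u, v and w denote applicative values, g an ordinary function, and x and y ordinary values):

identity:     pure id <*> v = v
composition:  pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
homomorphism: pure g <*> pure x = pure (g x)
interchange:  u <*> pure y = pure ($ y) <*> u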

Every applicative is a functor. To be explicit, given the methods pure and <*>, fmap can be implemented as [3]

fmap g x = pure g <*> x

The commonly used notation g <$> x is equivalent to pure g <*> x.
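For example, in GHCi (with arbitrarily chosen values), the three spellings agree for Maybe:

ghci> fmap (+1) (Just 2)
Just 3
ghci> pure (+1) <*> Just 2
Just 3
ghci> (+1) <$> Just 2
Just 3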

Examples

In Haskell, the Maybe type can be made an instance of the type class Applicative using the following definition: [2]

instance Applicative Maybe where
  -- pure :: a -> Maybe a
  pure a = Just a

  -- (<*>) :: Maybe (a -> b) -> Maybe a -> Maybe b
  Nothing  <*> _        = Nothing
  _        <*> Nothing  = Nothing
  (Just g) <*> (Just x) = Just (g x)

As stated in the Definition section, pure turns a value of type a into a Maybe a, and <*> applies a Maybe function to a Maybe value. Using the Maybe applicative for type a allows one to operate on values of type a, with errors handled automatically by the applicative machinery. For example, to add m :: Maybe Int and n :: Maybe Int, one need only write

(+) <$> m <*> n

For the non-error case, adding m = Just i and n = Just j gives Just (i + j). If either m or n is Nothing, then the result is also Nothing. This example also demonstrates how applicatives allow a sort of generalized function application.
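A minimal, self-contained version of this example (the concrete values 3 and 4 are arbitrary):

main :: IO ()
main = do
  let m = Just 3 :: Maybe Int
      n = Just 4 :: Maybe Int
  print ((+) <$> m <*> n)        -- Just 7
  print ((+) <$> m <*> Nothing)  -- Nothing: the second operand is missing
  print ((+) <$> (Nothing :: Maybe Int) <*> n)  -- Nothing: the first operand is missing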

References

  1. McBride, Conor; Paterson, Ross (2008-01-01). "Applicative programming with effects". Journal of Functional Programming. 18 (1): 1–13. CiteSeerX 10.1.1.114.1555. doi:10.1017/S0956796807006326. ISSN 1469-7653.
  2. Hutton, Graham (2016). Programming in Haskell (2nd ed.). pp. 157–163.
  3. "Control.Applicative".