In functional programming, a monad is a structure that combines program fragments (functions) and wraps their return values in a type with additional computation. In addition to defining a wrapping monadic type, monads define two operators: one to wrap a value in the monad type, and another to compose together functions that output values of the monad type (these are known as monadic functions). General-purpose languages use monads to reduce boilerplate code needed for common operations (such as dealing with undefined values or fallible functions, or encapsulating bookkeeping code). Functional languages use monads to turn complicated sequences of functions into succinct pipelines that abstract away control flow and side effects. [1] [2]
Both the concept of a monad and the term originally come from category theory, where a monad is defined as a functor with additional structure. [a] Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws , which should be satisfied by any monad and can be used to verify monadic code. [3] [4]
Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances. [1] [5]
"For a monad m
, a value of type m a
represents having access to a value of type a
within the context of the monad." —C. A. McCann [6]
More exactly, a monad can be used where unrestricted access to a value is inappropriate for reasons specific to the scenario. In the case of the Maybe monad, it is because the value may not exist. In the case of the IO monad, it is because the value may not be known yet, such as when the monad represents user input that will only be provided after a prompt is displayed. In all cases the scenarios in which access makes sense are captured by the bind operation defined for the monad; for the Maybe monad a value is bound only if it exists, and for the IO monad a value is bound only after the previous operations in the sequence have been performed.
A monad can be created by defining a type constructor M and two operations:

return :: a -> M a (often also called unit), which receives a value of type a and wraps it into a monadic value of type M a, and

bind :: (M a) -> (a -> M b) -> (M b) (typically represented as >>=), which receives a monadic value M a and a function f that accepts values of the base type a. Bind unwraps a, applies f to it, and can process the result of f as a monadic value M b.

(An alternative but equivalent construct using the join function instead of the bind operator can be found in the later section § Derivation from functors.)
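As a concrete illustration, the two operations can be written out in Haskell for the Maybe type discussed below (a minimal sketch; the names unit and bind are used here to avoid clashing with the built-in return and >>=):

-- A minimal sketch of the two monad operations, specialized to Maybe.
unit :: a -> Maybe a
unit x = Just x

bind :: Maybe a -> (a -> Maybe b) -> Maybe b
bind Nothing  _ = Nothing    -- nothing to unwrap, so the function is skipped
bind (Just x) f = f x        -- unwrap x and apply the monadic function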
With these elements, the programmer composes a sequence of function calls (a "pipeline") with several bind operators chained together in an expression. Each function call transforms its input plain-type value, and the bind operator handles the returned monadic value, which is fed into the next step in the sequence.
Typically, the bind operator >>= may contain code unique to the monad that performs additional computation steps not available in the function received as a parameter. Between each pair of composed function calls, the bind operator can inject into the monadic value m a some additional information that is not accessible within the function f, and pass it along down the pipeline. It can also exert finer control of the flow of execution, for example by calling the function only under some conditions, or executing the function calls in a particular order.
One example of a monad is the Maybe type. Undefined null results are one particular pain point that many procedural languages don't provide specific tools for dealing with, requiring use of the null object pattern or checks for invalid values at each operation. This causes bugs and makes it harder to build robust software that gracefully handles errors. The Maybe type forces the programmer to deal with these potentially undefined results by explicitly defining the two states of a result: Just(result), or Nothing. For example, the programmer might be constructing a parser, which is to return an intermediate result, or else signal a condition which the parser has detected, and which the programmer must also handle. With just a little extra functional spice on top, this Maybe type transforms into a fully featured monad. [b]: 12.3 pages 148–151
In most languages, the Maybe monad is also known as an option type, which is just a type that marks whether or not it contains a value. Typically it is expressed as some kind of enumerated type. In this Rust example we will call it Maybe<T>, and variants of this type can either be a value of generic type T, or the empty variant: Nothing.

// The <T> represents a generic type "T"
enum Maybe<T> {
    Just(T),
    Nothing,
}
Maybe<T> can also be understood as a "wrapping" type, and this is where its connection to monads comes in. In languages with some form of the Maybe type, there are functions that aid in their use, such as composing monadic functions with each other and testing if a Maybe contains a value.

In the following hard-coded example, a Maybe type is used as the result of functions that may fail; in this case the type returns Nothing if there is a divide-by-zero.

fn divide(x: Decimal, y: Decimal) -> Maybe<Decimal> {
    if y == 0 {
        return Nothing
    } else {
        return Just(x / y)
    }
}

// divide(1.0, 4.0) -> returns Just(0.25)
// divide(3.0, 0.0) -> returns Nothing
One such way to test whether or not a Maybe contains a value is to use if statements:

let m_x = divide(3.14, 0.0); // see divide function above

// The if statement extracts x from m_x if m_x is the Just variant of Maybe
if let Just(x) = m_x {
    println!("answer: {}", x)
} else {
    println!("division failed, divide by zero error...")
}

Other languages may have pattern matching:

let result = divide(3.0, 2.0);
match result {
    Just(x) => println!("Answer: {}", x),
    Nothing => println!("division failed; we'll get 'em next time."),
}
Monads can compose functions that return Maybe, putting them together. A concrete example might have one function take in several Maybe parameters, and return a single Maybe whose value is Nothing when any of the parameters are Nothing, as in the following:

fn chainable_division(maybe_x: Maybe<Decimal>, maybe_y: Maybe<Decimal>) -> Maybe<Decimal> {
    match (maybe_x, maybe_y) {
        (Just(x), Just(y)) => {
            // If both inputs are Just, check for division by zero and divide accordingly
            if y == 0 {
                return Nothing
            } else {
                return Just(x / y)
            }
        },
        _ => return Nothing // Otherwise return Nothing
    }
}

chainable_division(chainable_division(Just(2.0), Just(0.0)), Just(1.0));
// the inner chainable_division fails, so the outer chainable_division returns Nothing
Instead of repeating Just expressions, we can use something called a bind operator (also known as "map", "flatmap", or "shove" [8]: 2205s). This operation takes a monad and a function that returns a monad, runs the function on the inner value of the passed monad, and returns the monad produced by the function.

// Rust example using ".map". maybe_x is passed through 2 functions that return Maybe<Decimal>
// and Maybe<String> respectively.
// As with normal function composition, the inputs and outputs of functions feeding into each
// other should match wrapped types (i.e. the add_one function should return a Maybe<Decimal>,
// which can then be unwrapped to a Decimal for the decimal_to_string function).
let maybe_x: Maybe<Decimal> = Just(1.0);
let maybe_result = maybe_x.map(add_one).map(decimal_to_string);
In Haskell, there is an operator bind, or (>>=), that allows for this monadic composition in a more elegant form similar to function composition. [c]: 150–151

halve :: Int -> Maybe Int
halve x
  | even x = Just (x `div` 2)
  | odd x  = Nothing

-- This code halves x twice. It evaluates to Nothing if x is not a multiple of 4.
halve x >>= halve
With >>= available, chainable_division can be expressed much more succinctly with the help of anonymous functions (i.e. lambdas). Notice in the expression below how the two nested lambdas each operate on the wrapped value in the passed Maybe monad using the bind operator. [d]: 93

chainable_division(mx, my) = mx >>= (λx -> my >>= (λy -> Just(x / y)))
What has been shown so far is basically a monad, but to be more concise, the following is a strict list of qualities necessary for a monad as defined by the following section:

1. A monadic type constructor (here, Maybe) [b]: 148–151
2. A unit operation that wraps a plain value into the monadic type (here, Just(x)) [d]: 93
3. A bind operation that composes monadic functions (here, >>= or .flatMap()) [c]: 150–151

These are the 3 things necessary to form a monad. Other monads may embody different logical processes, and some may have additional properties, but all of them will have these three similar components. [1] [9]
The more common definition for a monad in functional programming, used in the above example, is actually based on a Kleisli triple ⟨T, η, μ⟩ rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad. Given any well-defined basic types T and U, a monad consists of three parts:
1. A type constructor M that builds up a monadic type M T
2. A type converter, often called unit or return, that embeds a value x in the monad:
unit : T → M T [f]
3. A combinator, typically called bind and represented with an infix operator >>= or a method called flatMap, that unwraps a monadic variable, then inserts it into a monadic function/expression, resulting in a new monadic value:
(ma >>= f) : (M T, T → M U) → M U

To fully qualify as a monad though, these three parts must also respect a few laws:

unit(x) >>= f ↔ f(x)  (unit acts as a left identity for bind)
ma >>= unit ↔ ma  (unit acts as a right identity for bind)
(ma >>= f) >>= g ↔ ma >>= (λx → f(x) >>= g)  (bind is associative)

Algebraically, this means any monad gives rise to both a category (called the Kleisli category) and a monoid in the category of functors (from values to computations), with monadic composition as a binary operator in the monoid [8]: 2450s and unit as identity in the monoid.
The value of the monad pattern goes beyond merely condensing code and providing a link to mathematical reasoning. Whatever language or default programming paradigm a developer uses, following the monad pattern brings many of the benefits of purely functional programming. By reifying a specific kind of computation, a monad not only encapsulates the tedious details of that computational pattern, but it does so in a declarative way, improving the code's clarity. As monadic values explicitly represent not only computed values, but computed effects, a monadic expression can be replaced with its value in referentially transparent positions, much like pure expressions can be, allowing for many techniques and optimizations based on rewriting. [4]
Typically, programmers will use bind to chain monadic functions into a sequence, which has led some to describe monads as "programmable semicolons", a reference to how many imperative languages use semicolons to separate statements. [1] [5] However, monads do not actually order computations; even in languages that use them as central features, simpler function composition can arrange steps within a program. A monad's general utility rather lies in simplifying a program's structure and improving separation of concerns through abstraction. [4] [11]
The monad structure can also be seen as a uniquely mathematical and compile time variation on the decorator pattern. Some monads can pass along extra data that is inaccessible to functions, and some even exert finer control over execution, for example only calling a function under certain conditions. Because they let application programmers implement domain logic while offloading boilerplate code onto pre-developed modules, monads can even be considered a tool for aspect-oriented programming. [12]
One other noteworthy use for monads is isolating side-effects, like input/output or mutable state, in otherwise purely functional code. Even purely functional languages can still implement these "impure" computations without monads, via an intricate mix of function composition and continuation-passing style (CPS) in particular. [2] With monads though, much of this scaffolding can be abstracted away, essentially by taking each recurring pattern in CPS code and bundling it into a distinct monad. [4]
If a language does not support monads by default, it is still possible to implement the pattern, often without much difficulty. When translated from category-theory to programming terms, the monad structure is a generic concept and can be defined directly in any language that supports an equivalent feature for bounded polymorphism. A concept's ability to remain agnostic about operational details while working on underlying types is powerful, but the unique features and stringent behavior of monads set them apart from other concepts. [13]
Discussions of specific monads will typically focus on solving a narrow implementation problem since a given monad represents a specific computational form. In some situations though, an application can even meet its high-level goals by using appropriate monads within its core logic.
Here are just a few applications that have monads at the heart of their designs:
The term "monad" in programming dates to the APL and J programming languages, which do tend toward being purely functional. However, in those languages, "monad" is only shorthand for a function taking one parameter (a function with two parameters being a "dyad", and so on). [19]
The mathematician Roger Godement was the first to formulate the concept of a monad (dubbing it a "standard construction") in the late 1950s, though the term "monad" that came to dominate was popularized by category-theorist Saunders Mac Lane.[ citation needed ] The form defined above using bind, however, was originally described in 1965 by mathematician Heinrich Kleisli in order to prove that any monad could be characterized as an adjunction between two (covariant) functors. [20]
Starting in the 1980s, a vague notion of the monad pattern began to surface in the computer science community. According to programming language researcher Philip Wadler, computer scientist John C. Reynolds anticipated several facets of it in the 1970s and early 1980s, when he discussed the value of continuation-passing style, of category theory as a rich source for formal semantics, and of the type distinction between values and computations. [4] The research language Opal, which was actively designed up until 1990, also effectively based I/O on a monadic type, but the connection was not realized at the time. [21]
The computer scientist Eugenio Moggi was the first to explicitly link the monad of category theory to functional programming, in a conference paper in 1989, [22] followed by a more refined journal submission in 1991. In earlier work, several computer scientists had advanced using category theory to provide semantics for the lambda calculus. Moggi's key insight was that a real-world program is not just a function from values to other values, but rather a transformation that forms computations on those values. When formalized in category-theoretic terms, this leads to the conclusion that monads are the structure to represent these computations. [3]
Several others popularized and built on this idea, including Philip Wadler and Simon Peyton Jones, both of whom were involved in the specification of Haskell. In particular, Haskell used a problematic "lazy stream" model up through v1.2 to reconcile I/O with lazy evaluation, until switching over to a more flexible monadic interface. [23] The Haskell community would go on to apply monads to many problems in functional programming, and in the 2010s, researchers working with Haskell eventually recognized that monads are applicative functors; [24] [i] and that both monads and arrows are monoids. [26]
At first, programming with monads was largely confined to Haskell and its derivatives, but as functional programming has influenced other paradigms, many languages have incorporated a monad pattern (in spirit if not in name). Formulations now exist in Scheme, Perl, Python, Racket, Clojure, Scala, F#, and have also been considered for a new ML standard.[ citation needed ]
One benefit of the monad pattern is bringing mathematical precision to the composition of computations. Not only can the monad laws be used to check an instance's validity, but features from related structures (like functors) can be used through subtyping.

Returning to the Maybe example, its components were declared to make up a monad, but no proof was given that it satisfies the monad laws. This can be rectified by plugging the specifics of Maybe into one side of the general laws, then algebraically building a chain of equalities to reach the other side:

Law 1: eta(a) >>= f ⇔ (Just a) >>= f ⇔ f(a)

Law 2: ma >>= eta ⇔ ma
    if ma is (Just a) then eta(a) ⇔ Just a
    else Nothing ⇔ Nothing

Law 3: (ma >>= f) >>= g ⇔ ma >>= (λx → f(x) >>= g)
    if ma is (Just a) and f(a) is (Just b), both sides reduce to g(b)
    else if ma is (Just a) and f(a) is Nothing, both sides reduce to Nothing
    else (ma is Nothing), both sides reduce to Nothing
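The same laws can also be spot-checked in Haskell with concrete values (a minimal sketch; f and g are illustrative helpers, and this demonstrates rather than proves the laws):

f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = if even x then Just (x + 1) else Nothing

main :: IO ()
main = do
  print ((return 3 >>= f) == f 3)                                    -- Law 1: left identity
  print ((Just 3 >>= return) == Just 3)                              -- Law 2: right identity
  print (((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g)))   -- Law 3: associativity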
Though rarer in computer science, one can use category theory directly, which defines a monad as a functor with two additional natural transformations. [j] So to begin, a structure requires a higher-order function (or "functional") named map to qualify as a functor:
map : (a → b) → (ma → mb)
This is not always a major issue, however, especially when a monad is derived from a pre-existing functor, whereupon the monad inherits map automatically. (For historical reasons, this map is instead called fmap in Haskell.)
A monad's first transformation is actually the same unit from the Kleisli triple, but following the hierarchy of structures closely, it turns out unit characterizes an applicative functor, an intermediate structure between a monad and a basic functor. In the applicative context, unit is sometimes referred to as pure but is still the same function. What does differ in this construction is the law unit must satisfy; as bind is not defined, the constraint is given in terms of map instead:
(unit ∘ φ) x ↔ ((map φ) ∘ unit) x [27]

The final leap from applicative functor to monad comes with the second transformation, the join function (in category theory this is a natural transformation usually called μ), which "flattens" nested applications of the monad:
join(mma) : M (M T) → M T
As the characteristic function, join must also satisfy three variations on the monad laws:
(join ∘ (map join)) mmma ↔ (join ∘ join) mmma ↔ ma
(join ∘ (map unit)) ma ↔ (join ∘ unit) ma ↔ ma
(join ∘ (map map φ)) mma ↔ ((map φ) ∘ join) mma ↔ mb
Regardless of whether a developer defines a direct monad or a Kleisli triple, the underlying structure will be the same, and the forms can be derived from each other easily:
(map φ) ma ↔ ma >>= (unit ∘ φ)
join(mma) ↔ mma >>= id
ma >>= f ↔ (join ∘ (map f)) ma [28]
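The same correspondence can be sketched in Haskell: given a functor equipped with unit and join, bind follows mechanically from the last formula (the class name JoinMonad and the primed names are illustrative, not standard library names):

class Functor m => JoinMonad m where
  unit' :: a -> m a
  join' :: m (m a) -> m a

  -- bind derived exactly as above: join ∘ (map f)
  bind' :: m a -> (a -> m b) -> m b
  bind' ma f = join' (fmap f ma)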
The List monad naturally demonstrates how deriving a monad from a simpler functor can come in handy. In many languages, a list structure comes pre-defined along with some basic features, so a List type constructor and append operator (represented with ++ for infix notation) are assumed as already given here.
Embedding a plain value in a list is also trivial in most languages:
unit(x) = [x]
From here, applying a function iteratively with a list comprehension may seem like an easy choice for bind and converting lists to a full monad. The difficulty with this approach is that bind expects monadic functions, which in this case will output lists themselves; as more functions are applied, layers of nested lists will accumulate, requiring more than a basic comprehension.
However, a procedure to apply any simple function over the whole list, in other words map, is straightforward:
(map φ) xlist = [ φ(x1), φ(x2), ..., φ(xn) ]
Now, these two procedures already promote List to an applicative functor. To fully qualify as a monad, only a correct notion of join to flatten repeated structure is needed, but for lists, that just means unwrapping an outer list to append the inner ones that contain values:
join(xlistlist) = join([xlist1, xlist2, ..., xlistn]) = xlist1 ++ xlist2 ++ ... ++ xlistn
The resulting monad is not only a list, but one that automatically resizes and condenses itself as functions are applied. bind can now also be derived with just a formula, then used to feed List values through a pipeline of monadic functions:
(xlist >>= f) = join ∘ (map f) xlist
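The whole derivation can be written out for ordinary Haskell lists (a minimal sketch; unitL, joinL, and bindL are illustrative names rather than the standard Prelude ones):

unitL :: a -> [a]
unitL x = [x]

joinL :: [[a]] -> [a]
joinL = foldr (++) []          -- append the inner lists together

bindL :: [a] -> (a -> [b]) -> [b]
bindL xs f = joinL (map f xs)  -- bind = join ∘ (map f)

-- bindL [1, 2, 3] (\x -> [x, x * 10])  ==  [1, 10, 2, 20, 3, 30]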
One application for this monadic list is representing nondeterministic computation. List can hold results for all execution paths in an algorithm, then condense itself at each step to "forget" which paths led to which results (a sometimes important distinction from deterministic, exhaustive algorithms).[citation needed] Another benefit is that checks can be embedded in the monad; specific paths can be pruned transparently at their first point of failure, with no need to rewrite functions in the pipeline. [28]

A second situation where List shines is composing multivalued functions. For instance, the nth complex root of a number should yield n distinct complex numbers, but if another mth root is then taken of those results, the final m•n values should be identical to the output of the m•nth root. List completely automates this issue away, condensing the results from each step into a flat, mathematically correct list. [29]
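A small Haskell sketch in the same spirit (using integer square roots rather than complex roots, as a simplifying assumption) shows the list monad flattening composed multivalued functions:

-- intSqrts is an illustrative multivalued function: every integer square root of n.
intSqrts :: Int -> [Int]
intSqrts n = [ r | r <- [negate n .. n], r * r == n ]

-- Fourth roots fall out of composing square roots twice; the list monad
-- flattens the intermediate lists automatically.
fourthRoots :: Int -> [Int]
fourthRoots n = intSqrts n >>= intSqrts

-- fourthRoots 16  ==  [-2, 2]   (intSqrts 16 == [-4, 4]; -4 contributes no roots)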
Monads present opportunities for interesting techniques beyond just organizing program logic. Monads can lay the groundwork for useful syntactic features while their high-level and mathematical nature enable significant abstraction.
Although using bind openly often makes sense, many programmers prefer a syntax that mimics imperative statements (called do-notation in Haskell, perform-notation in OCaml, computation expressions in F#, [30] and for comprehension in Scala). This is only syntactic sugar that disguises a monadic pipeline as a code block; the compiler will then quietly translate these expressions into underlying functional code.
Translating the add function from the Maybe example into Haskell can show this feature in action. A non-monadic version of add in Haskell looks like this:

add mx my = case mx of
    Nothing -> Nothing
    Just x  -> case my of
        Nothing -> Nothing
        Just y  -> Just (x + y)

In monadic Haskell, return is the standard name for unit, plus lambda expressions must be handled explicitly, but even with these technicalities, the Maybe monad makes for a cleaner definition:

add mx my = mx >>= (\x -> my >>= (\y -> return (x + y)))
With do-notation though, this can be distilled even further into a very intuitive sequence:
add mx my = do
    x <- mx
    y <- my
    return (x + y)
A second example shows how Maybe can be used in an entirely different language: F#. With computation expressions, a "safe division" function that returns None for an undefined operand or division by zero can be written as:

let readNum () =
  let s = Console.ReadLine()
  let succ, v = Int32.TryParse(s)
  if (succ) then Some(v) else None

let secure_div =
  maybe {
    let! x = readNum ()
    let! y = readNum ()
    if (y = 0)
    then None
    else return (x / y)
  }
At build-time, the compiler will internally "de-sugar" this function into a denser chain of bind calls:
maybe.Delay(fun () ->
  maybe.Bind(readNum (), fun x ->
    maybe.Bind(readNum (), fun y ->
      if (y = 0) then None
      else maybe.Return(x / y))))
For a last example, even the general monad laws themselves can be expressed in do-notation:
do { x <- return v; f x }  ==  do { f v }
do { x <- m; return x }    ==  do { m }
do { y <- do { x <- m; f x }; g y }  ==  do { x <- m; y <- f x; g y }
Every monad needs a specific implementation that meets the monad laws, but other aspects like the relation to other structures or standard idioms within a language are shared by all monads. As a result, a language or library may provide a general Monad interface with function prototypes, subtyping relationships, and other general facts. Besides providing a head-start to development and guaranteeing a new monad inherits features from a supertype (such as functors), checking a monad's design against the interface adds another layer of quality control.[citation needed]

Monadic code can often be simplified even further through the judicious use of operators. The map functional can be especially helpful since it works on more than just ad-hoc monadic functions; so long as a monadic function should work analogously to a predefined operator, map can be used to instantly "lift" the simpler operator into a monadic one. [k] With this technique, the definition of add from the Maybe example could be distilled into:
add(mx,my) = map (+)
The process could be taken even one step further by defining add not just for Maybe, but for the whole Monad interface. By doing this, any new monad that matches the structure interface and implements its own map will immediately inherit a lifted version of add too. The only change to the function needed is generalizing the type signature:
add : (Monad Number, Monad Number) → Monad Number [31]
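In Haskell, this kind of generalized lifting is what liftM2 from Control.Monad provides; a minimal sketch (addM is an illustrative name, not part of the article's example):

import Control.Monad (liftM2)

-- addM works for any monad and any numeric content type.
addM :: (Monad m, Num a) => m a -> m a -> m a
addM = liftM2 (+)

-- addM (Just 2) (Just 3)  ==  Just 5
-- addM (Just 2) Nothing   ==  Nothing
-- addM [1, 2] [10, 20]    ==  [11, 21, 12, 22]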
Another monadic operator that is also useful for analysis is monadic composition (represented as infix >=> here), which allows chaining monadic functions in a more mathematical style:
(f >=> g)(x) = f(x) >>= g
With this operator, the monad laws can be written in terms of functions alone, highlighting the correspondence to associativity and existence of an identity:
(unit >=> g) ↔ g
(f >=> unit) ↔ f
(f >=> g) >=> h ↔ f >=> (g >=> h) [1]
In turn, the above shows the meaning of the "do" block in Haskell:
do { _p <- f(x); _q <- g(_p); h(_q) }  ↔  (f >=> g >=> h)(x)
The simplest monad is the Identity monad, which just annotates plain values and functions to satisfy the monad laws:
newtype Id T = T

unit(x) = x
(x >>= f) = f(x)
Identity does actually have valid uses though, such as providing a base case for recursive monad transformers. It can also be used to perform basic variable assignment within an imperative-style block. [l][citation needed]
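A minimal Haskell sketch of such an Identity monad (mirroring Data.Functor.Identity) and its use for imperative-style assignment:

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
  Identity x >>= f = f x

-- Basic "variable assignment" in an imperative-style do block:
example :: Int
example = runIdentity $ do
  x <- Identity 2
  y <- Identity 3
  return (x + y)    -- example == 5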
Any collection with a proper append is already a monoid, but it turns out that List is not the only collection that also has a well-defined join and qualifies as a monad. One can even mutate List into these other monadic collections by simply imposing special properties on append: [m] [n]

Collection      | Commutative? | Idempotent? | Details     | Ordered? | Unique items?
----------------|--------------|-------------|-------------|----------|--------------
List            | No           | No          | Free monoid | Yes      | No
Finite multiset | Yes          | No          |             | No       | No
Finite set      | Yes          | Yes         |             | No       | Yes
As already mentioned, pure code should not have unmanaged side effects, but that does not preclude a program from explicitly describing and managing effects. This idea is central to Haskell's IO monad, where an object of type IO a can be seen as describing an action to be performed in the world, optionally providing information about the world of type a. An action that provides no information about the world has the type IO (), "providing" the dummy value (). When a programmer binds an IO value to a function, the function computes the next action to be performed based on the information about the world provided by the previous action (input from users, files, etc.). [23] Most significantly, because the value of the IO monad can only be bound to a function that computes another IO monad, the bind function imposes a discipline of a sequence of actions where the result of an action can only be provided to a function that will compute the next action to perform. This means that actions which do not need to be performed never are, and actions that do need to be performed have a well defined sequence, solving the problem of (IO) actions not being referentially transparent.
For example, Haskell has several functions for acting on the wider file system, including one that checks whether a file exists and another that deletes a file. Their two type signatures are:
doesFileExist :: FilePath -> IO Bool
removeFile    :: FilePath -> IO ()

The first is interested in whether a given file really exists, and as a result, outputs a Boolean value within the IO monad. The second function, on the other hand, is only concerned with acting on the file system, so the IO container it outputs is empty.
IO is not limited just to file I/O though; it even allows for user I/O, and along with imperative syntax sugar, can mimic a typical "Hello, World!" program:

main :: IO ()
main = do
  putStrLn "Hello, world!"
  putStrLn "What is your name, user?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")
Desugared, this translates into the following monadic pipeline (>> in Haskell is just a variant of bind for when only monadic effects matter and the underlying result can be discarded):

main :: IO ()
main =
  putStrLn "Hello, world!" >>
  putStrLn "What is your name, user?" >>
  getLine >>= (\name ->
    putStrLn ("Nice to meet you, " ++ name ++ "!"))
Another common situation is keeping a log file or otherwise reporting a program's progress. Sometimes, a programmer may want to log even more specific, technical data for later profiling or debugging. The Writer monad can handle these tasks by generating auxiliary output that accumulates step-by-step.
To show how the monad pattern is not restricted to primarily functional languages, this example implements a Writer monad in JavaScript. First, an array (with nested tails) allows constructing the Writer type as a linked list. The underlying output value will live in position 0 of the array, and position 1 will implicitly hold a chain of auxiliary notes:

const writer = value => [value, []];
Defining unit is also very simple:
const unit = value => [value, []];
Only unit is needed to define simple functions that output Writer objects with debugging notes:

const squared = x => [x * x, [`${x} was squared.`]];
const halved  = x => [x / 2, [`${x} was halved.`]];
A true monad still requires bind, but for Writer, this amounts simply to concatenating a function's output to the monad's linked list:

const bind = (writer, transform) => {
  const [value, log] = writer;
  const [result, updates] = transform(value);
  return [result, log.concat(updates)];
};
The sample functions can now be chained together using bind, but defining a version of monadic composition (called pipelog here) allows applying these functions even more succinctly:

const pipelog = (writer, ...transforms) => transforms.reduce(bind, writer);
The final result is a clean separation of concerns between stepping through computations and logging them to audit later:
pipelog(unit(4), squared, halved);
// Resulting writer object = [8, ['4 was squared.', '16 was halved.']]
An environment monad (also called a reader monad and a function monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type E → T, where E is the type of the shared environment. The monad functions return and bind, along with the useful operations ask and local, are sketched below: the ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad.
Formally, a value in an environment monad is equivalent to a function with an additional, anonymous argument; return and bind are equivalent to the K and S combinators, respectively, in the SKI combinator calculus.
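A minimal Haskell sketch of an environment monad (the names mirror Haskell's Control.Monad.Reader, but everything needed is defined locally here):

newtype Reader e a = Reader { runReader :: e -> a }

instance Functor (Reader e) where
  fmap f (Reader r) = Reader (f . r)

instance Applicative (Reader e) where
  pure x = Reader (\_ -> x)
  Reader rf <*> Reader rx = Reader (\e -> rf e (rx e))

instance Monad (Reader e) where
  Reader r >>= f = Reader (\e -> runReader (f (r e)) e)

-- ask retrieves the current environment; local runs a computation in a modified one.
ask :: Reader e e
ask = Reader id

local :: (e -> e) -> Reader e a -> Reader e a
local f (Reader r) = Reader (r . f)

-- Invoking a computation just applies it to an environment value:
-- runReader (local (+ 1) ask) 41  ==  42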
A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of type s) along with a return value (of type t). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling a mutable environment.

type State s t = s -> (t, s)
Note that this monad takes a type parameter, the type of the state information. The monad operations are defined as follows:
-- "return" produces the given value without changing the state.returnx=\s->(x,s)-- "bind" modifies m so that it applies f to its result.m>>=f=\r->let(x,s)=mrin(fx)s
Useful state operations include:
get      = \s -> (s, s)      -- Examine the state at this point in the computation.
put s    = \_ -> ((), s)     -- Replace the state.
modify f = \s -> ((), f s)   -- Update the state.
Another operation applies a state monad to a given initial state:
runState :: State s a -> s -> (a, s)
runState t s = t s
do-blocks in a state monad are sequences of operations that can examine and update the state data.
Informally, a state monad of state type S maps the type of return values T into functions of type S → T × S, where S is the underlying state. The return and bind functions are the ones given in the Haskell definitions above: return pairs a value with the unchanged state, while bind threads the state through one computation and into the next.
From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition.
A continuation monad [o] with return type R maps type T into functions of type (T → R) → R. It is used to model continuation-passing style. The return and bind functions, together with the call-with-current-continuation function call/cc, are sketched below.
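A minimal Haskell sketch of a continuation monad (the names mirror Haskell's Control.Monad.Cont, but everything needed is defined locally here):

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure x = Cont (\k -> k x)
  Cont cf <*> Cont cx = Cont (\k -> cf (\f -> cx (k . f)))

instance Monad (Cont r) where
  Cont c >>= f = Cont (\k -> c (\x -> runCont (f x) k))

-- call-with-current-continuation: hands the current continuation to its argument.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\x -> Cont (\_ -> k x))) k)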
The following code is pseudocode. Suppose we have two functions foo and bar, with types

foo : int -> int
bar : int -> int
That is, both functions take in an integer and return another integer. Then we can apply the functions in succession like so:
foo (bar x)
Where the result is the result of foo applied to the result of bar applied to x.

But suppose we are debugging our program, and we would like to add logging messages to foo and bar. So we change the types like so:

foo : int -> int * string
bar : int -> int * string
So that both functions return a tuple, with the result of the application as the integer, and a logging message with information about the applied function and all the previously applied functions as the string.
Unfortunately, this means we can no longer compose foo and bar, as their input type int is not compatible with their output type int * string. And although we can again gain composability by modifying the types of each function to be int * string -> int * string, this would require us to add boilerplate code to each function to extract the integer from the tuple, which would get tedious as the number of such functions increases.

Instead, let us define a helper function to abstract away this boilerplate for us:

bind : int * string -> (int -> int * string) -> int * string
bind takes in an integer and string tuple, then takes in a function (like foo) that maps from an integer to an integer and string tuple. Its output is an integer and string tuple, which is the result of applying the input function to the integer within the input integer and string tuple. In this way, we only need to write boilerplate code to extract the integer from the tuple once, in bind.
Now we have regained some composability. For example:
bind (bind (x, s) bar) foo

Where (x, s) is an integer and string tuple. [p]

To make the benefits even clearer, let us define an infix operator as an alias for bind:

(>>=) : int * string -> (int -> int * string) -> int * string

So that t >>= f is the same as bind t f.
Then the above example becomes:
((x, s) >>= bar) >>= foo

Finally, we define a new function to avoid writing (x, "") every time we wish to create an empty logging message, where "" is the empty string.

return : int -> int * string

Which wraps x in the tuple described above.
The result is a pipeline for logging messages:
((return x) >>= bar) >>= foo

That allows us to more easily log the effects of bar and foo on x.

int * string denotes a pseudo-coded monadic value. [p] bind and return are analogous to the corresponding functions of the same name. In fact, int * string, bind, and return form a monad.
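A minimal Haskell rendering of this pseudocode makes the claim concrete (bind', return', foo, and bar are illustrative definitions, not the Prelude's):

bind' :: (Int, String) -> (Int -> (Int, String)) -> (Int, String)
bind' (x, s) f = let (y, s') = f x in (y, s ++ s')

return' :: Int -> (Int, String)
return' x = (x, "")

foo, bar :: Int -> (Int, String)
foo x = (x + 1, "foo was applied. ")
bar x = (x * 2, "bar was applied. ")

-- (return' 3 `bind'` bar) `bind'` foo
--   ==  (7, "bar was applied. foo was applied. ")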
An additive monad is a monad endowed with an additional closed, associative, binary operator mplus and an identity element under mplus, called mzero. The Maybe monad can be considered additive, with Nothing as mzero and a variation on the OR operator as mplus. List is also an additive monad, with the empty list [] acting as mzero and the concatenation operator ++ as mplus.
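A minimal Haskell sketch of this additive structure, mirroring the standard MonadPlus instance for Maybe (firstJust is an illustrative name):

import Control.Monad (MonadPlus (..))

-- mplus for Maybe keeps the first defined value, an "OR"-like behaviour.
firstJust :: Maybe a -> Maybe a -> Maybe a
firstJust = mplus

-- firstJust (Just 1) (Just 2)            ==  Just 1
-- firstJust Nothing  (Just 2)            ==  Just 2
-- (mzero :: Maybe Int) >>= Just . (+ 1)  ==  Nothing   -- mzero absorbs bind
-- For lists, mzero == [] and mplus == (++).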
Intuitively, mzero represents a monadic wrapper with no value from an underlying type, but is also considered a "zero" (rather than a "one") since it acts as an absorber for bind, returning mzero whenever bound to a monadic function. This property is two-sided, and bind will also return mzero when any value is bound to a monadic zero function.
In category-theoretic terms, an additive monad qualifies once as a monoid over monadic functions with bind (as all monads do), and again over monadic values via mplus. [32] [q]
Sometimes, the general outline of a monad may be useful, but no simple pattern recommends one monad or another. This is where a free monad comes in; as a free object in the category of monads, it can represent monadic structure without any specific constraints beyond the monad laws themselves. Just as a free monoid concatenates elements without evaluation, a free monad allows chaining computations with markers to satisfy the type system, but otherwise imposes no deeper semantics itself.
For example, by working entirely through the Just and Nothing markers, the Maybe monad is in fact a free monad. The List monad, on the other hand, is not a free monad since it brings extra, specific facts about lists (like append) into its definition. One last example is an abstract free monad:

data Free f a = Pure a | Free (f (Free f a))

unit :: a -> Free f a
unit x = Pure x

bind :: Functor f => Free f a -> (a -> Free f b) -> Free f b
bind (Pure x) f = f x
bind (Free x) f = Free (fmap (\y -> bind y f) x)
Free monads, however, are not restricted to a linked-list like in this example, and can be built around other structures like trees.
Using free monads intentionally may seem impractical at first, but their formal nature is particularly well-suited for syntactic problems. A free monad can be used to track syntax and type while leaving semantics for later, and has found use in parsers and interpreters as a result. [33] Others have applied them to more dynamic, operational problems too, such as providing iteratees within a language. [34]
Besides generating monads with extra properties, for any given monad, one can also define a comonad. Conceptually, if monads represent computations built up from underlying values, then comonads can be seen as reductions back down to values. Monadic code, in a sense, cannot be fully "unpacked"; once a value is wrapped within a monad, it remains quarantined there along with any side-effects (a good thing in purely functional programming). Sometimes though, a problem is more about consuming contextual data, which comonads can model explicitly.
Technically, a comonad is the categorical dual of a monad, which loosely means that it will have the same required components, only with the direction of the type signatures reversed. Starting from the bind-centric monad definition, a comonad consists of:
A type constructor W that builds the comonadic type W T;
counit, the dual of unit, which extracts the underlying value from the comonad: counit(wa) : W T → T; and
an extension operator (the reversal of bind, represented with =>>) that extends a chain of reducing functions: (wa =>> f) : (W U, W U → T) → W T [r]
extend and counit must also satisfy duals of the monad laws:
counit ∘ ( (wa =>> f) → wb ) ↔ f(wa) → b
wa =>> counit ↔ wa
wa ( (=>> f(wx = wa)) → wb (=>> g(wy = wb)) → wc ) ↔ ( wa (=>> f(wx = wa)) → wb ) (=>> g(wy = wb)) → wc
Analogous to monads, comonads can also be derived from functors using a dual of join:
duplicate(wa) : W T → W (W T)
While operations like extend are reversed, however, a comonad does not reverse functions it acts on, and consequently, comonads are still functors with map, not cofunctors. The alternate definition with duplicate, counit, and map must also respect its own comonad laws:
((map duplicate) ∘ duplicate) wa ↔ (duplicate ∘ duplicate) wa ↔ wwwa
((map counit) ∘ duplicate) wa ↔ (counit ∘ duplicate) wa ↔ wa
((map map φ) ∘ duplicate) wa ↔ (duplicate ∘ (map φ)) wa ↔ wwb
And as with monads, the two forms can be converted automatically:
(map φ) wa ↔ wa =>> (φ ∘ counit) wx
duplicate wa ↔ wa =>> wx
wa =>> f(wx) ↔ ((map f) ∘ duplicate) wa
A simple example is the Product comonad, which outputs values based on an input value and shared environment data. In fact, the Product comonad is just the dual of the Writer monad and effectively the same as the Reader monad (both discussed above). Product and Reader differ only in which function signatures they accept, and how they complement those functions by wrapping or unwrapping values.
A less trivial example is the Stream comonad, which can be used to represent data streams and attach filters to the incoming signals with extend. In fact, while not as popular as monads, researchers have found comonads particularly useful for stream processing and modeling dataflow programming. [35] [36]
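A hedged Haskell sketch of such a Stream comonad (the type and function names are illustrative):

data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

-- counit extracts the value at the current position.
counit :: Stream a -> a
counit (Cons x _) = x

-- duplicate turns a stream into the stream of all of its suffixes.
duplicate :: Stream a -> Stream (Stream a)
duplicate s@(Cons _ xs) = Cons s (duplicate xs)

-- extend (=>>) applies a whole-stream "filter" at every position.
extend :: (Stream a -> b) -> Stream a -> Stream b
extend f = fmap f . duplicate

-- Example filter: a sliding sum over the current element and its successor.
slidingSum :: Num a => Stream a -> a
slidingSum (Cons x (Cons y _)) = x + y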
Due to their strict definitions, however, one cannot simply move objects back and forth between monads and comonads. As an even higher abstraction, arrows can subsume both structures, but finding more granular ways to combine monadic and comonadic code is an active area of research. [37] [38]
Alternatives for modeling computations:
Related design concepts:
Generalizations of monads:
bind, which when given a type a that may fail, and a mapping a → b that may fail, produces a result b that may fail. (Hutton, 2016) [7]
lift, along with multiple versions for different parameter counts, a detail ignored here.
The Identity monad can also be viewed as emerging from adjunction of any functor with its inverse.
bind has pasted in a string where previously only an integer had been; that is, the programmer has constructed an adjunction: a tuple (x, s), denoted int * string in the pseudocode § above.