miniKanren is a family of programming languages for relational programming. [1] As relations are bidirectional, if miniKanren is given an expression and a desired output, miniKanren can run the expression "backward", finding all possible inputs to the expression that produce the desired output. This bidirectional behavior allows the user to constrain both the input to the program and the result of the program simultaneously. miniKanren performs an interleaved search which will eventually find any solution that exists, even if any one branch of the search tree is infinitely long and contains no solutions. If no solution exists, miniKanren may search forever if the search tree is infinite.
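For example, the classic appendo relation, shown below as a sketch assuming a standard miniKanren embedding that provides run*, fresh, conde, and == (the exact interface varies between implementations), relates two lists to their concatenation and can be run in either direction:

;; appendo relates the lists l and s to their concatenation out.
(define (appendo l s out)
  (conde
    ((== '() l) (== s out))
    ((fresh (a d res)
       (== `(,a . ,d) l)
       (== `(,a . ,res) out)
       (appendo d s res)))))

;; "Forward": compute the concatenation of '(1 2) and '(3).
;; (run* (q) (appendo '(1 2) '(3) q))  => ((1 2 3))
;; "Backward": find every pair of lists that concatenates to '(1 2 3).
;; (run* (q) (fresh (x y) (appendo x y '(1 2 3)) (== q `(,x ,y))))
;; => ((() (1 2 3)) ((1) (2 3)) ((1 2) (3)) ((1 2 3) ())), in some order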
An example of miniKanren code is evalo, a relational goal that relates expressions to the values they evaluate to. When evalo is called in miniKanren as (evalo q q), it generates quines, that is, expressions q that evaluate to themselves when run. [2]
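In a typical embedding the query is posed with run, asking for a bounded number of answers (a sketch; the exact interface varies between implementations):

(run 1 (q) (evalo q q))  ; ask for one expression q that evaluates to itself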
The book The Reasoned Schemer uses miniKanren to demonstrate relational programming, and provides a complete implementation in Scheme. [3] The core of the language fits on two printed pages. The Scheme implementation of miniKanren is designed to be easily understood, modified, and extended.
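As an illustration of how small such a core can be, the sketch below follows the microKanren presentation of Hemann and Friedman rather than the book's exact code; the helper names and the use of one-element vectors for logic variables are conventions of this sketch, not of The Reasoned Schemer.

;; A logic variable is a one-element vector holding a counter.
(define (var c) (vector c))
(define (var? x) (vector? x))

;; walk: look up a variable's value in the substitution s, chasing chains of bindings.
(define (walk u s)
  (if (var? u)
      (let ((pr (assoc u s)))    ; variables with the same counter compare equal?
        (if pr (walk (cdr pr) s) u))
      u))

(define (ext-s x v s) (cons (cons x v) s))

;; unify: extend s so that u and v are equal, or return #f if that is impossible.
(define (unify u v s)
  (let ((u (walk u s)) (v (walk v s)))
    (cond
      ((and (var? u) (var? v) (equal? u v)) s)
      ((var? u) (ext-s u v s))
      ((var? v) (ext-s v u s))
      ((and (pair? u) (pair? v))
       (let ((s1 (unify (car u) (car v) s)))
         (and s1 (unify (cdr u) (cdr v) s1))))
      (else (and (eqv? u v) s)))))

;; A goal maps a state (substitution . fresh-variable counter) to a stream of states.
(define mzero '())
(define (unit s/c) (cons s/c mzero))

(define (== u v)
  (lambda (s/c)
    (let ((s (unify u v (car s/c))))
      (if s (unit (cons s (cdr s/c))) mzero))))

;; call/fresh introduces a new logic variable and passes it to the goal constructor f.
(define (call/fresh f)
  (lambda (s/c)
    (let ((c (cdr s/c)))
      ((f (var c)) (cons (car s/c) (+ c 1))))))

;; disj succeeds if either goal succeeds; conj requires both.
(define (disj g1 g2) (lambda (s/c) (mplus (g1 s/c) (g2 s/c))))
(define (conj g1 g2) (lambda (s/c) (bind (g1 s/c) g2)))

;; mplus merges two streams; a procedure represents a suspended (immature) stream,
;; and swapping the arguments when forcing it gives the interleaving search.
(define (mplus $1 $2)
  (cond
    ((null? $1) $2)
    ((procedure? $1) (lambda () (mplus $2 ($1))))
    (else (cons (car $1) (mplus (cdr $1) $2)))))

;; bind applies the goal g to every state in the stream.
(define (bind $ g)
  (cond
    ((null? $) mzero)
    ((procedure? $) (lambda () (bind ($) g)))
    (else (mplus (g (car $)) (bind (cdr $) g)))))

A goal built from these operators is simply a function from a state to a (possibly suspended) stream of states; for example, ((call/fresh (lambda (q) (== q 5))) '(() . 0)) yields a single state in which the fresh variable is bound to 5.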
αleanTAP is a program written in αKanren, an extension of miniKanren for nominal logic. Given a theorem, it can find a proof, making it a theorem-prover. Given a proof, it can find the theorem, making it a theorem-checker. Given part of a proof and part of a theorem, it will fill in the missing parts of the proof and the theorem, making it a theorem-explorer. [1]
There are implementations of miniKanren in Haskell, Racket, Ruby, Clojure, JavaScript, Scala, Swift, Dart and Python. The canonical implementation is an embedded language in Scheme. The Clojure core.logic library was inspired by miniKanren.
The name kanren comes from a Japanese word (関連) meaning "relation".
In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963, and is a generalization of classical information theory.
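For instance, a text consisting of a million repeated characters has low Kolmogorov complexity, because a very short program suffices to produce it; in Scheme (used here only for illustration):

(display (make-string 1000000 #\a))  ; prints one million copies of the character a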
First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
Logic programming is a programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses, H :- B1, ..., Bn, read declaratively as logical implications: H is true if B1 and ... and Bn are all true.
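Such a rule can be expressed directly as a miniKanren relation. The sketch below assumes a standard miniKanren providing conde, fresh, and ==; the facts and relation names (parento, ancestoro) are invented for illustration.

;; Facts: alice is a parent of bob, and bob is a parent of carol.
(define (parento p c)
  (conde
    ((== p 'alice) (== c 'bob))
    ((== p 'bob)   (== c 'carol))))

;; Rule: a is an ancestor of d if a is a parent of d,
;; or a is a parent of some m who is an ancestor of d.
(define (ancestoro a d)
  (conde
    ((parento a d))
    ((fresh (m)
       (parento a m)
       (ancestoro m d)))))

;; (run* (q) (ancestoro 'alice q))  => (bob carol), in some order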
Lisp is a family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1960, Lisp is the second-oldest high-level programming language still in common use, after Fortran. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best-known general-purpose Lisp dialects are Common Lisp, Scheme, Racket and Clojure.
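In this notation the operator always comes first and every form is delimited by parentheses; for example:

(+ 1 (* 2 3))  ; => 7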
In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
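As a small illustrative check (not part of the original statement of the laws), the following Scheme predicate verifies both laws for a pair of boolean values:

(define (de-morgan-holds? a b)
  (and (eqv? (not (and a b)) (or (not a) (not b)))    ; not (a and b) = (not a) or (not b)
       (eqv? (not (or a b)) (and (not a) (not b)))))  ; not (a or b)  = (not a) and (not b)

;; (de-morgan-holds? #t #f) => #t, and likewise for the other three pairs of booleans.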
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored.
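A minimal sketch, assuming a tree node is represented as a list whose first element is the node's value and whose remaining elements are its children; the list used as a queue is kept simple rather than efficient.

(define (bfs-find pred root)
  (let loop ((queue (list root)))                 ; queue of discovered but unexplored nodes
    (cond
      ((null? queue) #f)                          ; no node satisfies pred
      ((pred (car (car queue))) (car queue))      ; found a satisfying node
      (else (loop (append (cdr queue)             ; dequeue the front node...
                          (cdr (car queue))))))))  ; ...and enqueue its children

;; (bfs-find even? '(1 (2) (3 (4) (5))))  => (2), the first even node in breadth-first order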
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
In mathematics and logic, an operation is finitary if it has finite arity, i.e. if it has a finite number of input values. Similarly, an infinitary operation is one with an infinite number of input values.
In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another.
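For example, 7 divided by 3 leaves remainder 1; the two common sign conventions differ on negative operands, as Scheme's two operators show:

(modulo 7 3)      ; => 1
(modulo -7 3)     ; => 2   (floor-based: result takes the sign of the divisor)
(remainder -7 3)  ; => -1  (truncation-based: result takes the sign of the dividend)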
Daniel Paul Friedman is a professor of Computer Science at Indiana University in Bloomington, Indiana. His research focuses on programming languages, and he is a prominent author in the field.
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
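A standard illustration is the factorial function: a finite recursive definition, with no explicit loop, that covers infinitely many inputs.

(define (factorial n)
  (if (zero? n)
      1                            ; base case: 0! = 1
      (* n (factorial (- n 1)))))  ; recursive case: n! = n * (n-1)!

;; (factorial 5)  => 120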
Metamath is a formal language and an associated computer program for archiving, verifying, and studying mathematical proofs. Several databases of proved theorems have been developed using Metamath covering standard results in logic, set theory, number theory, algebra, topology and analysis, among others.
SLD resolution is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
In database theory, a conjunctive query is a restricted form of first-order queries using the logical conjunction operator. Many first-order queries can be written as conjunctive queries. In particular, a large part of queries issued on relational databases can be expressed in this way. Conjunctive queries also have a number of desirable theoretical properties that larger classes of queries do not share.
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run.
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.
Concolic testing is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution path. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness.
In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and the negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction and division. Boolean algebra is therefore a formal way of describing logical operations, in the same way that elementary algebra describes numerical operations.