Logic for Computable Functions


Logic for Computable Functions (LCF) is an interactive theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of the logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML to allow users to write theorem-proving tactics; ML supports algebraic data types, parametric polymorphism, abstract data types, and exceptions.


Basic idea

Theorems in the system are values of a special "theorem" abstract data type. ML's general mechanism of abstract data types ensures that theorems can be derived only through the inference rules provided as the operations of the theorem type. Users can write arbitrarily complex ML programs to compute theorems; the validity of a theorem does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.
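To illustrate the mechanism, the following is a minimal sketch of an LCF-style kernel written in OCaml (an ML dialect) for a toy logic with only conjunction. The module name Kernel and the rule names (assume, conj_intro, conj_elim1, conj_elim2, dest_thm) are hypothetical and do not correspond to any real system's API.

    (* Minimal sketch of an LCF-style kernel for a toy logic with conjunction.
       Values of the abstract type thm can only be built by the rule functions below. *)
    module Kernel : sig
      type formula = Atom of string | And of formula * formula
      type thm                                        (* abstract theorem type *)
      val assume     : formula -> thm                 (* p |- p *)
      val conj_intro : thm -> thm -> thm              (* from |- p and |- q derive |- p /\ q *)
      val conj_elim1 : thm -> thm                     (* from |- p /\ q derive |- p *)
      val conj_elim2 : thm -> thm                     (* from |- p /\ q derive |- q *)
      val dest_thm   : thm -> formula list * formula  (* hypotheses and conclusion *)
    end = struct
      type formula = Atom of string | And of formula * formula
      type thm = formula list * formula               (* hypotheses, conclusion *)
      let assume p = ([p], p)
      let conj_intro (h1, p) (h2, q) = (h1 @ h2, And (p, q))
      let conj_elim1 (h, c) =
        match c with And (p, _) -> (h, p) | _ -> failwith "conj_elim1: not a conjunction"
      let conj_elim2 (h, c) =
        match c with And (_, q) -> (h, q) | _ -> failwith "conj_elim2: not a conjunction"
      let dest_thm th = th
    end

    (* Example: derives the theorem  a, b |- a  entirely through kernel rules. *)
    let _ = Kernel.(dest_thm (conj_elim1 (conj_intro (assume (Atom "a")) (assume (Atom "b")))))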

Advantages

The LCF approach provides trustworthiness similar to that of systems which generate explicit proof certificates, but without the need to store proof objects in memory. The theorem data type can easily be implemented to optionally store proof objects, depending on the system's run-time configuration, so it generalizes the basic proof-generating approach. The decision to use a general-purpose programming language for developing theorems means that, depending on the complexity of the programs written, the same language can be used to write step-by-step proofs, decision procedures, or theorem provers.
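For instance, the theorem type in the sketch above could carry an optional proof term that is filled in only when a run-time flag is set; the flag name record_proofs and the proof constructors below are hypothetical, not taken from any real system.

    (* Sketch of a theorem type that optionally records proof objects,
       controlled by a run-time flag.  Hypotheses are omitted for brevity. *)
    type formula = Atom of string | And of formula * formula
    type proof   = Assumed of formula | ConjIntro of proof * proof

    let record_proofs = ref false             (* run-time configuration switch *)

    type thm = { concl : formula; proof : proof option }

    let assume p =
      { concl = p;
        proof = (if !record_proofs then Some (Assumed p) else None) }

    let conj_intro a b =
      { concl = And (a.concl, b.concl);
        proof =
          (match a.proof, b.proof with
           | Some pa, Some pb -> Some (ConjIntro (pa, pb))
           | _ -> None) }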

Disadvantages

Trusted computing base

The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML [1] resulted in a formally verified ML compiler, alleviating some of these concerns.

Efficiency and complexity of proof procedures

Theorem proving often benefits from decision procedures and theorem-proving algorithms whose correctness has been extensively analyzed. A straightforward way of implementing these procedures in an LCF approach requires them to always derive outcomes from the axioms, lemmas, and inference rules of the system, as opposed to directly computing the outcome. A potentially more efficient approach is to use reflection to prove that a function operating on formulas always gives a correct result.[2]
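To make the contrast concrete, the sketch below reuses the hypothetical Kernel module from the "Basic idea" section: a toy proof procedure that extracts |- p from a theorem whose conclusion is a nested conjunction containing p, deriving every intermediate step through kernel inference rules. A reflective alternative would instead prove, once and for all, that an ordinary membership test on formulas is correct, and then run that test directly.

    (* Toy LCF-style proof procedure: every result is obtained by applying
       kernel inference rules, never by asserting a formula directly. *)
    let rec prove_conjunct (p : Kernel.formula) (th : Kernel.thm) : Kernel.thm option =
      let (_, c) = Kernel.dest_thm th in
      if c = p then Some th
      else
        match c with
        | Kernel.And _ ->
            (match prove_conjunct p (Kernel.conj_elim1 th) with
             | Some r -> Some r
             | None   -> prove_conjunct p (Kernel.conj_elim2 th))
        | _ -> None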

Influences

Among subsequent implementations is Cambridge LCF. Later systems simplified the logic to use total functions instead of partial functions, leading to HOL, HOL Light, and the Isabelle proof assistant, which supports various logics. As of 2019, the Isabelle proof assistant still contains an implementation of an LCF logic, Isabelle/LCF.

Notes

  1. "CakeML" . Retrieved 2 November 2019.
  2. Boyer, Robert S; Moore, J Strother. Metafunctions: Proving Them Correct and Using Them Efficiently as New Proof Procedures (PDF) (Report). Technical Report CSL-108, SRI Projects 8527/4079. pp. 1–111. Archived (PDF) from the original on November 2, 2019. Retrieved 2 November 2019.

Related Research Articles

Automated theorem proving is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

ML is a general-purpose, high-level, functional programming language. It is known for its use of the polymorphic Hindley–Milner type system, which automatically assigns the data types of most expressions without requiring explicit type annotations, and ensures type safety; there is a formal proof that a well-typed ML program does not cause runtime type errors. ML provides pattern matching for function arguments, garbage collection, imperative programming, call-by-value and currying. While a general-purpose programming language, ML is used heavily in programming language research and is one of the few languages to be completely specified and verified using formal semantics. Its types and pattern matching make it well-suited and commonly used to operate on other formal languages, such as in compiler writing, automated theorem proving, and formal verification.
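As a small illustration in OCaml (an ML dialect), the definition below uses pattern matching on its list argument, and the compiler infers a polymorphic type with no annotations.

    (* Type inference assigns this function the polymorphic type  'a list -> int. *)
    let rec length xs =
      match xs with
      | []      -> 0
      | _ :: tl -> 1 + length tl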

Prolog is a logic programming language that has its origins in artificial intelligence, automated theorem proving and computational linguistics.

OCaml is a general-purpose, high-level, multi-paradigm programming language which extends the Caml dialect of ML with object-oriented features. OCaml was created in 1996 by Xavier Leroy, Jérôme Vouillon, Damien Doligez, Didier Rémy, Ascánder Suárez, and others.

<span class="mw-page-title-main">Robin Milner</span> British computer scientist (1934–2010)

Arthur John Robin Gorell Milner was a British computer scientist and a Turing Award winner.

Standard ML (SML) is a general-purpose, high-level, modular, functional programming language with compile-time type checking and type inference. It is popular for writing compilers, for programming language research, and for developing theorem provers.

<span class="mw-page-title-main">Isabelle (proof assistant)</span> Higher-order logic (HOL) automated theorem prover

The Isabelle automated theorem prover is a higher-order logic (HOL) theorem prover, written in Standard ML and Scala. As an LCF-style theorem prover, it is based on a small logical core (kernel) to increase the trustworthiness of proofs without requiring — yet supporting — explicit proof objects.

HOL denotes a family of interactive theorem proving systems using similar (higher-order) logics and implementation strategies. Systems in this family follow the LCF approach as they are implemented as a library which defines an abstract data type of proven theorems such that new objects of this type can only be created using the functions in the library which correspond to inference rules in higher-order logic. As long as these functions are correctly implemented, all theorems proven in the system must be valid. As such, a large system can be built on top of a small trusted kernel.

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods of mathematics. Formal verification is a key incentive for formal specification of systems, and is at the core of formal methods. It represents an important dimension of analysis and verification in electronic design automation and is one approach to software verification. The use of formal verification enables the highest Evaluation Assurance Level (EAL7) in the framework of Common Criteria for computer security certification.

Type inference, sometimes called type reconstruction, refers to the automatic detection of the type of an expression in a formal language. These include programming languages and mathematical type systems, but also natural languages in some branches of computer science and linguistics.

<span class="mw-page-title-main">Coq (software)</span> Proof assistant

Coq is an interactive theorem prover first released in 1989. It allows for expressing mathematical assertions, mechanically checks proofs of these assertions, helps find formal proofs, and extracts a certified program from the constructive proof of its formal specification. Coq works within the theory of the calculus of inductive constructions, a derivative of the calculus of constructions. Coq is not an automated theorem prover but includes automatic theorem proving tactics (procedures) and various decision procedures.

<span class="mw-page-title-main">Proof assistant</span> Software tool to assist with the development of formal proofs by human-machine collaboration

In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human-machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer.

Refinement is a generic term of computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification.

Twelf is an implementation of the logical framework LF developed by Frank Pfenning and Carsten Schürmann at Carnegie Mellon University. It is used for logic programming and for the formalization of programming language theory.

In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.

In computing, compiler correctness is the branch of computer science that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing on an existing compiler.

In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs.
