
**Logic in computer science** covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:

**Logic** is the systematic study of the form of valid inference, and the most general laws of truth. A valid inference is one where there is a specific relation of logical support between the assumptions of the inference and its conclusion. In ordinary discourse, inferences may be signified by words such as *therefore*, *hence*, *ergo*, and so on.

**Computer science** is the study of mathematical algorithms and processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate, store, and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems.


- Theoretical foundations and analysis
- Use of computer technology to aid logicians
- Use of concepts from logic for computer applications

Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic, and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing.^{ [1] }^{ [2] } Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure, and Kurt Gödel asserted that he found Turing's analysis "perfect."^{ [3] } In addition, some other major areas of theoretical overlap between logic and computer science are:
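Turing's unsolvability result can be sketched in a few lines of Python. If a total, computable `halts` oracle existed, the `diagonal` function below would halt on its own source exactly when the oracle says it does not, a contradiction. This is an illustration of the diagonal argument, not a decision procedure; `halts` is a hypothetical name that can never be implemented in general.

```python
def halts(program, data):
    """Hypothetical halting oracle: True iff program(data) would terminate.
    Turing's argument shows no such total, computable function can exist."""
    raise NotImplementedError("undecidable in general")

def diagonal(program):
    # If halts() were computable, diagonal(diagonal) would halt
    # exactly when halts(diagonal, diagonal) says it does not.
    if halts(program, program):
        while True:   # loop forever when told "it halts"
            pass
    return "halted"   # halt immediately when told "it loops"
```

The contradiction forces the conclusion that `halts` cannot exist as an algorithm, which is why the stub raises instead of answering.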

**Computability theory**, also known as **recursion theory**, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, recursion theory overlaps with proof theory and effective descriptive set theory.

**Modal logic** is a type of formal logic primarily developed in the 1960s that extends classical propositional and predicate logic to include operators expressing modality. A modal—a word that expresses a modality—qualifies a statement. For example, the statement "John is happy" might be qualified by saying that John is usually happy, in which case the term "usually" is functioning as a modal. The traditional alethic modalities, or modalities of truth, include possibility, necessity, and impossibility. Other modalities that have been formalized in modal logic include temporal modalities (modalities of time), deontic modalities (modalities of obligation), epistemic modalities (modalities of knowledge), and doxastic modalities (modalities of belief).
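These modal operators have a standard possible-worlds (Kripke) reading, sketched below in Python: "necessarily p" holds at a world when p holds at every accessible world, and "possibly p" when p holds at some accessible world. The worlds, accessibility relation, and valuation are invented for illustration.

```python
# A toy Kripke model: three worlds, an accessibility relation, and a
# valuation saying at which worlds the proposition "p" is true.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
holds = {"p": {"w2", "w3"}}

def necessarily(prop, w):
    # True iff prop holds at every world accessible from w
    return all(v in holds[prop] for v in access[w])

def possibly(prop, w):
    # True iff prop holds at some world accessible from w
    return any(v in holds[prop] for v in access[w])
```

Note that at `w3`, which has no accessible worlds, "necessarily p" is vacuously true while "possibly p" is false, a standard quirk of the semantics.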

**Category theory** formalizes mathematical structure and its concepts in terms of a labeled directed graph called a *category*, whose nodes are called *objects*, and whose labelled directed edges are called *arrows*. A category has two basic properties: the ability to compose the arrows associatively, and the existence of an identity arrow for each object. The language of category theory has been used to formalize concepts of other high-level abstractions such as sets, rings, and groups. Informally, category theory is a general theory of functions.
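The two defining properties can be checked concretely by treating Python functions as arrows between types; `double` and `inc` below are arbitrary example arrows chosen for illustration.

```python
# Arrows are functions; composition and identity are the category structure.
def compose(g, f):
    # the arrow "g after f"
    return lambda x: g(f(x))

identity = lambda x: x      # identity arrow on any object

double = lambda n: 2 * n    # example arrow int -> int
inc = lambda n: n + 1       # example arrow int -> int

# Associativity: (inc . double) . id  ==  inc . (double . id)
left = compose(compose(inc, double), identity)
right = compose(inc, compose(double, identity))
```

Both groupings produce the same function, matching the informal description of category theory as "a general theory of functions".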

- Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software.^{ [4] }
- The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals and state of an artificial intelligence agent.^{ [5] }
- The Curry–Howard correspondence is a relation between logical systems and software. It establishes a precise correspondence between proofs and programs: in particular, terms in the simply typed lambda calculus correspond to proofs of intuitionistic propositional logic.
- Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems for programming languages, the theory of transition systems, models of programming languages, and the theory of programming language semantics.^{ [6] }
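The Curry–Howard reading can be hinted at with Python type annotations: a term of type `Callable[[A], B]` plays the role of a proof of the implication "A implies B". The function names below are illustrative, not standard terminology.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(proof_ab: Callable[[A], B], proof_a: A) -> B:
    # Applying a function is eliminating an implication:
    # from "A implies B" and "A", conclude "B".
    return proof_ab(proof_a)

def transitivity(ab: Callable[[A], B], bc: Callable[[B], C]) -> Callable[[A], C]:
    # Composing proofs of "A implies B" and "B implies C"
    # yields a proof of "A implies C".
    return lambda a: bc(ab(a))
```

Under this reading, type-checking a program is checking a proof, which is the core of the correspondence.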

In artificial intelligence, the **frame problem** describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a FOL system, additional axioms are required to make inferences about the environment. The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment.
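A minimal sketch of the issue, with invented blocks-world facts: an action's effect axioms change a few facts, and the "everything else persists" assumption, which plain FOL would need many explicit frame axioms to state, is collapsed here into a single copy rule.

```python
# State as a set of ground facts (all names are illustrative).
state = {("on", "a", "table"), ("on", "b", "table"),
         ("clear", "a"), ("clear", "b")}

def move(state, block, src, dst):
    # Effect axioms: what the action explicitly changes.
    effects_del = {("on", block, src), ("clear", dst)}
    effects_add = {("on", block, dst), ("clear", src)}
    # The frame assumption, collapsed into one rule: every fact the
    # action does not mention carries over unchanged.
    return (state - effects_del) | effects_add

new_state = move(state, "a", "table", "b")
```

In a procedural setting the copy is one line; stating the same persistence fact-by-fact as FOL axioms is exactly the burden the frame problem describes.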

**First-order logic**—also known as **predicate logic** and **first-order predicate calculus**—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows sentences that contain variables, so that rather than propositions such as *Socrates is a man* one can have expressions of the form "there exists X such that X is Socrates and X is a man", where *there exists* is a quantifier and *X* is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
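Over a finite domain, quantified sentences like the one above can be evaluated directly; the domain and predicate below are toy examples.

```python
# A finite domain of individuals and a unary predicate, both invented.
domain = {"socrates", "plato", "fido"}
is_man = {"socrates", "plato"}

# "there exists X such that X is Socrates and X is a man"
exists_socrates_man = any(x == "socrates" and x in is_man for x in domain)

# "for all X, X is a man" -- false here, since fido is in the
# domain but not in the extension of the predicate
all_men = all(x in is_man for x in domain)
```

Propositional logic, by contrast, could only treat "Socrates is a man" as a single opaque proposition with no internal structure to quantify over.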

One of the first applications to use the term *artificial intelligence* was the Logic Theorist system developed by Allen Newell, J.C. Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, given a logical system that states "All humans are mortal" and "Socrates is human", a valid conclusion is "Socrates is mortal". This is of course a trivial example; in actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. The Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in their influential work on mathematical logic, *Principia Mathematica*. In addition, subsequent systems have been utilized by logicians to validate and discover new logical theorems and proofs.^{ [7] }
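The kind of deduction described here can be sketched as a tiny forward-chaining loop over unary predicates; the encoding below is an illustrative simplification, not the Logic Theorist's actual method.

```python
# "All humans are mortal" as a rule over predicates, and
# "Socrates is human" as a ground fact.
rules = [("human", "mortal")]
facts = {("human", "socrates")}

def deduce(facts, rules):
    # Repeatedly apply every rule to every matching fact until
    # no new conclusions appear (a fixpoint).
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived
```

Running this derives ("mortal", "socrates"), i.e. "Socrates is mortal", mechanically from the two inputs.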

**Allen Newell** was a researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University’s School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their basic contributions to artificial intelligence and the psychology of human cognition.

**Herbert Alexander Simon** was an American economist and political scientist whose primary interest was decision-making within organizations and who is best known for the theories of "bounded rationality" and "satisficing". He received the Nobel Prize in Economics in 1978 and the Turing Award in 1975. His research was noted for its interdisciplinary nature, spanning the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001.

**Bertrand Arthur William Russell, 3rd Earl Russell**, was a British philosopher, logician, mathematician, historian, writer, social critic, political activist, and Nobel laureate. At various points in his life, Russell considered himself a liberal, a socialist and a pacifist, although he also confessed that his skeptical nature had led him to feel that he had "never been any of these things, in any profound sense." Russell was born in Monmouthshire into one of the most prominent aristocratic families in the United Kingdom.

There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inference could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. There is no more general or powerful known method for describing and analyzing information than FOL. The reason FOL itself is simply not used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever solve. For this reason every form of knowledge representation is in some sense a trade-off between expressivity and computability. The more expressive a language is (that is, the closer it is to FOL), the more likely it is to be slower and prone to infinite loops.^{ [8] }

For example, IF–THEN rules used in expert systems approximate a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators, the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation.^{ [9] }

In artificial intelligence, an **expert system** is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
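A minimal sketch of that two-part architecture, with invented medical facts and rules: the knowledge base is plain data, and the inference engine forward-chains (repeated modus ponens on ground rules) until no new facts can be deduced.

```python
# Knowledge base: known facts plus IF-THEN rules, all names invented.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),   # IF fever AND cough THEN ...
    ({"flu_suspected"}, "recommend_rest"),
]

def infer(facts, rules):
    """Inference engine: apply rules to known facts until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # modus ponens on a ground rule
                changed = True
    return facts
```

The separation matters in practice: domain experts edit only the rule data, while the engine, and any explanation or debugging facilities layered on it, stays generic.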

In computer science, **rule-based systems** are used as a way to store and manipulate knowledge to interpret information in a useful way. They are often used in artificial intelligence applications and research.

Another major area of research for logical theory was software engineering. Research projects such as the Knowledge-Based Software Assistant and Programmer's Apprentice applied logical theory to validate the correctness of software specifications, to transform specifications into efficient code on diverse platforms, and to prove the equivalence between an implementation and its specification.^{ [10] } This formal, transformation-driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those, such as weapons systems, security systems, and real-time financial systems, where failure of the system has excessively high human or financial cost. An example of such a domain is Very Large Scale Integration (VLSI) design, the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip is catastrophic: unlike software, chips can't be patched or updated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification.^{ [11] }

Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE have a rigid semantics: definitions in KL-ONE can be directly mapped to set theory and the predicate calculus. This allows specialized theorem provers called classifiers to analyze the various declarations between sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example defining new sets based on existing information and changing the definition of existing sets based on new data. This level of flexibility is ideal for handling the ever-changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to provide a logical semantic level on top of the existing Internet. This layer is called the Semantic Web.^{ [12] }^{ [13] }
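What a classifier computes can be sketched with plain set operations, mirroring the mapping to set theory described above; the concept definitions are invented. A concept subsumes another when its defining properties are a subset of the other's, and "new information" falls out as derived subsumption links.

```python
# Concept definitions as sets of properties (all names illustrative).
definitions = {
    "person": {"animate"},
    "parent": {"animate", "has_child"},
    "mother": {"animate", "has_child", "female"},
}

def subsumes(general, specific):
    # general subsumes specific iff every defining property of the
    # general concept also appears in the specific concept's definition
    return definitions[general] <= definitions[specific]

def inferred_subsumers(concept):
    # the classifier's derived facts: every other concept whose
    # definition this one entails
    return {c for c in definitions
            if c != concept and subsumes(c, concept)}
```

Real classifiers over KL-ONE-style languages handle far richer constructs (roles, number restrictions), but the core operation is this subsumption test.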

Temporal logic is used for reasoning in concurrent systems.^{ [14] }
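Two common temporal operators, "always" and "eventually", can be sketched over a finite recorded trace of a concurrent system; the `req`/`grant` propositions are invented for illustration.

```python
# A finite execution trace: each element is the set of propositions
# true at that step (e.g. a request pending, a grant issued).
trace = [{"req"}, {"req", "grant"}, {"grant"}]

def always(prop, trace):
    # "globally prop": prop holds at every step of the trace
    return all(prop in state for state in trace)

def eventually(prop, trace):
    # "finally prop": prop holds at some step of the trace
    return any(prop in state for state in trace)
```

Model checkers apply operators like these (plus "until" and "next") to all possible interleavings of a concurrent system rather than a single recorded trace, which is where the real difficulty lies.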

In the field of computer science, **artificial intelligence** (**AI**), sometimes called **machine intelligence**, is intelligence demonstrated by machines, in contrast to the **natural intelligence** displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".

**Automated theorem proving** is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

**Knowledge representation and reasoning** is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of *reasoning*, such as the application of rules or the relations of sets and subsets.

**Mathematical logic** is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

**Computer science** is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

**Proof theory** is a major branch of mathematical logic that represents proofs as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as plain lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of the logical system. As such, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.

A **formal system** is used to infer theorems from axioms according to a set of rules. These rules, which are used to carry out the inference of theorems from axioms, are known as the **logical calculus** of the formal system. A formal system is essentially an "axiomatic system". In 1921, David Hilbert proposed to use such a system as the foundation for knowledge in mathematics. A formal system may represent a well-defined system of abstract thought. Spinoza's *Ethics* imitates the form of Euclid's *Elements*: Spinoza employed Euclidean elements such as "axioms" or "primitive truths" and rules of inference so that a calculus could be built using them.
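A concrete toy example in this spirit is Hofstadter's MIU system (from the book cited at [4]): a single axiom "MI" and four string-rewrite rules serve as the logical calculus, and the theorems are exactly the strings derivable from the axiom. The sketch below enumerates theorems to a bounded depth.

```python
def successors(s):
    """All strings derivable from s in one step of the MIU calculus."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (deleted)
    return out

def theorems(axiom="MI", depth=3):
    """Breadth-first enumeration of theorems up to a derivation depth."""
    found = {axiom}
    frontier = {axiom}
    for _ in range(depth):
        frontier = set().union(*(successors(s) for s in frontier)) - found
        found |= frontier
    return found
```

The famous puzzle is that "MU" is never derivable; a bounded search like this can suggest that but, being only an enumeration, cannot prove it.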

A **frame language** is a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.

**Loom** is a knowledge representation language developed by researchers in the artificial intelligence research group at the University of Southern California's Information Sciences Institute. The leader of the Loom project and primary architect for Loom was Robert MacGregor. The research was primarily sponsored by the Defense Advanced Research Projects Agency (DARPA).

**Robert Anthony Kowalski** is a logician and computer scientist, who has spent most of his career in the United Kingdom.

**Indeterminacy in concurrent computation** is concerned with the effects of indeterminacy on concurrent computation. Indeterminacy is becoming increasingly important in computation because of the massive increase in concurrency due to networking and the advent of many-core computer architectures. Such computer systems make use of arbiters, which give rise to indeterminacy.

**Automated reasoning** is an area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy.

**Logic** is the formal science of using reason and is considered a branch of both philosophy and mathematics. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify the correct and incorrect inferences. Logicians study the criteria for the evaluation of arguments.

In foundations of mathematics, philosophy of mathematics, and philosophy of logic, **formalism** is a theory that holds that statements of mathematics and logic can be considered to be statements about the consequences of certain string manipulation rules.

In information technology a **reasoning system** is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

The **Vienna Summer of Logic** was a scientific event in the summer of 2014, combining 12 major conferences and several workshops from the fields of mathematical logic, logic in computer science, and logic in artificial intelligence. The meetings took place from July 9 to 24, 2014, and attracted more than 2000 scientists and researchers.

A **deductive classifier** is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, sub-classes, properties, and restrictions on allowable values. In contrast to rule-based inference engines, which fire IF–THEN rules when their conditions are met, these classifiers seek to mimic human deductive logic.

The **Penrose–Lucas argument** is a logical argument partially based on a theory developed by mathematician and logician Kurt Gödel. In 1931, he proved that every effectively generated theory capable of proving basic arithmetic cannot be both consistent and complete. Mathematician Roger Penrose modified the argument in his first book on consciousness, *The Emperor's New Mind* (1989), where he used it to provide the basis of the theory of orchestrated objective reduction.

- ↑ Lewis, Harry R.; Papadimitriou, Christos H. (1981). *Elements of the Theory of Computation*. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-273417-6.
- ↑ Davis, Martin. "Influences of Mathematical Logic on Computer Science". In Rolf Herken (ed.). *The Universal Turing Machine*. Springer Verlag. Retrieved 26 December 2013.
- ↑ Kennedy, Juliette. *Interpreting Gödel*. Cambridge University Press. Retrieved 17 August 2015.
- ↑ Hofstadter, Douglas R. *Gödel, Escher, Bach: An Eternal Golden Braid*. Basic Books. ISBN 978-0465026562.
- ↑ McCarthy, J.; Hayes, P.J. (1969). "Some philosophical problems from the standpoint of artificial intelligence". *Machine Intelligence*. **4**: 463–502.
- ↑ Barr, Michael; Wells, Charles (1990). *Category Theory for Computing Science*. Prentice-Hall.
- ↑ Newell, Allen; Shaw, J.C.; Simon, H.A. (1963). "Empirical explorations with the logic theory machine". In Ed Feigenbaum (ed.). *Computers and Thought*. McGraw Hill. pp. 109–133. ISBN 978-0262560924.
- ↑ Levesque, Hector; Brachman, Ronald (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (eds.). *Readings in Knowledge Representation*. Morgan Kaufmann. p. 49. ISBN 0-934613-01-X. "The good news in reducing KR service to theorem proving is that we now have a very clear, very specific notion of what the KR system should do; the bad news is that it is also clear that the services cannot be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable."
- ↑ Forgy, Charles (1982). "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem" (PDF). *Artificial Intelligence*. **19**: 17–37. doi:10.1016/0004-3702(82)90020-0. Archived from the original (PDF) on 2013-12-27. Retrieved 25 December 2013.
- ↑ Rich, Charles; Waters, Richard C. (November 1987). "The Programmer's Apprentice Project: A Research Overview" (PDF). *IEEE Expert Special Issue on the Interactions between Expert Systems and Software Engineering*. Retrieved 26 December 2013.
- ↑ Stavridou, Victoria (1993). *Formal Methods in Circuit Design*. Press Syndicate of the University of Cambridge. ISBN 0-521-44336-9. Retrieved 26 December 2013.
- ↑ MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge representation". *IEEE Expert*. **6** (3): 41–46. doi:10.1109/64.87683. Retrieved 10 November 2013.
- ↑ Berners-Lee, Tim; Hendler, James; Lassila, Ora (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". *Scientific American*. **284**: 34–43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.
- ↑ Stirling, Colin (1992). "Modal and Temporal Logics". In S. Abramsky; D. M. Gabbay; T. S. E. Maibaum (eds.). *Handbook of Logic in Computer Science*. **II**. Oxford University Press. pp. 477–563. ISBN 0-19-853761-1.

- Augusto, Luis M. (2017). *Logical consequences. Theory and applications: An introduction*. London: College Publications. ISBN 978-1-84890-236-7.
- Ben-Ari, Mordechai (2003). *Mathematical Logic for Computer Science* (2nd ed.). Springer-Verlag. ISBN 1-85233-319-7.
- Huth, Michael; Ryan, Mark (2004). *Logic in Computer Science: Modelling and Reasoning about Systems* (2nd ed.). Cambridge University Press. ISBN 0-521-54310-X.
- Burris, Stanley N. (1997). *Logic for Mathematics and Computer Science*. Prentice Hall. ISBN 0-13-285974-2.

- Article on *Logic and Artificial Intelligence* at the Stanford Encyclopedia of Philosophy
- IEEE Symposium on Logic in Computer Science (LICS)
- Alwen Tiu, *Introduction to Logic*, video recording of a lecture at the ANU Logic Summer School '09 (aimed mostly at computer scientists)

This page is based on the Wikipedia article *Logic in computer science*.

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
