Logic in computer science

Diagrammatic representation of computer logic gates

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:

- Theoretical foundations and analysis
- Use of computer technology to aid logicians
- Use of concepts from logic for computer applications

Theoretical foundations and analysis

Logic plays a fundamental role in computer science. Some of the areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. [1] [2] Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure, and Kurt Gödel said he found Turing's analysis "perfect". [3] A toy sketch of the best-known unsolvable problem, the halting problem, is given after the list below. In addition, some other major areas of theoretical overlap between logic and computer science are:

- Gödel's incompleteness theorem, which shows that any consistent formal system expressive enough for arithmetic contains true statements that it cannot prove. [4]
- The frame problem, a basic difficulty that arises when first-order logic is used to represent the goals and state of an artificial intelligence agent. [5]
- Category theory, and in particular topos theory, which provides an abstract view of mathematical structure with deep connections to logic and to the semantics of programming languages. [6]
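
As mentioned above, the best-known algorithmically unsolvable problem is the halting problem: given an arbitrary program and input, decide whether the program eventually halts. The following is a minimal illustrative sketch of the standard argument (the function halts is hypothetical and is assumed only for the sake of contradiction; it is not part of any real library):

    # A sketch of the halting-problem argument, assuming a hypothetical decider
    # halts(program, argument) that returns True exactly when program(argument)
    # would eventually stop.

    def halts(program, argument):
        """Hypothetical oracle, assumed to exist only for the sake of argument."""
        raise NotImplementedError("no such total decider can exist")

    def paradox(program):
        # Do the opposite of whatever halts() predicts about running
        # program on itself.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop forever, so halt immediately

    # If halts() were a correct, always-terminating decider, then
    # paradox(paradox) would halt if and only if it does not halt, a
    # contradiction. Hence no such decider exists.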

Computers to assist logicians

One of the first applications to use the term artificial intelligence was the Logic Theorist system developed by Allen Newell, Cliff Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, if given the statements "All humans are mortal" and "Socrates is human", a valid conclusion is "Socrates is mortal". Of course this is a trivial example; in actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in Principia Mathematica, their influential work on mathematical logic. In addition, subsequent systems have been utilized by logicians to validate and discover new mathematical theorems and proofs. [7]
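
The Socrates example can be mechanized in a few lines. The following is a minimal sketch (purely illustrative; it is not how Logic Theorist itself was implemented):

    # Minimal mechanized deduction for the Socrates example. The rule encodes
    # "All humans are mortal" as: if X is human then X is mortal, and applying
    # it to the known facts is an instance of modus ponens.

    facts = {("human", "socrates")}

    def all_humans_are_mortal(known):
        # Derive ("mortal", x) for every x known to be human.
        return {("mortal", x) for (predicate, x) in known if predicate == "human"}

    derived = facts | all_humans_are_mortal(facts)
    print(("mortal", "socrates") in derived)   # True: "Socrates is mortal"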

Logic applications for computers

There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inferences could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. First-order logic is a general and powerful method for describing and analyzing information. The reason FOL itself is not simply used as a computer language is that it is too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever resolve. For this reason, every form of knowledge representation is in some sense a trade-off between expressivity and computability: the more expressive a language is (the closer it is to FOL), the more likely inference in it is to be slow or even to fall into an infinite loop. [8]
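
The contrast can be illustrated as follows (an example added for exposition, not drawn from the cited source): deciding the validity of arbitrary first-order sentences is undecidable in general, whereas the restricted Horn-clause form used by logic programming supports efficient, goal-directed inference.

    % An arbitrary first-order sentence; validity of such sentences is
    % undecidable in general (Church's theorem).
    \forall x\, \exists y\, \bigl( P(x) \rightarrow ( Q(x,y) \lor \neg R(y) ) \bigr)

    % A Horn-clause rule, the restricted fragment used by logic programming.
    \mathrm{mortal}(x) \leftarrow \mathrm{human}(x)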

For example, IF–THEN rules used in expert systems approximate to a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators, the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation. [9]
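
A minimal sketch of this idea is a forward-chaining loop that repeatedly applies IF–THEN rules to a working memory of facts (the rule and fact names below are made up for illustration; production systems based on the Rete algorithm implement the same principle far more efficiently):

    # Illustrative forward chaining over IF-THEN rules: repeated modus ponens.
    # Each rule pairs a set of required facts with the fact it concludes.
    rules = [
        ({"temperature_high", "pressure_rising"}, "open_relief_valve"),
        ({"open_relief_valve"}, "log_safety_event"),
    ]

    facts = {"temperature_high", "pressure_rising"}

    changed = True
    while changed:                        # repeat until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire the rule
                changed = True

    print(facts)   # includes 'open_relief_valve' and 'log_safety_event'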

On the other hand, logic programming, which combines the Horn clause subset of first-order logic with a non-monotonic form of negation, has both high expressive power and efficient implementations. In particular, the logic programming language Prolog is Turing complete. Datalog extends the relational database model with recursive relations, while answer set programming is a form of logic programming oriented towards difficult (primarily NP-hard) search problems.
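
For example, the recursive "ancestor" relation that Datalog supports can be sketched as a naive bottom-up fixpoint computation (the parent facts below are invented for illustration; real Datalog engines use much more efficient evaluation strategies):

    # Naive bottom-up evaluation of the recursive Datalog rules
    #   ancestor(X, Y) :- parent(X, Y).
    #   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
    parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

    ancestor = set(parent)                    # base case
    while True:
        new = {(x, y)
               for (x, z) in parent
               for (w, y) in ancestor
               if z == w}
        if new <= ancestor:                   # fixpoint: nothing new derivable
            break
        ancestor |= new

    print(("alice", "dave") in ancestor)      # True, derived by recursion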

Another major area of research for logical theory is software engineering. Research projects such as the Knowledge Based Software Assistant and Programmer's Apprentice programs have applied logical theory to validate the correctness of software specifications. They have also used logical tools to transform the specifications into efficient code on diverse platforms and to prove the equivalence between the implementation and the specification. [10] This formal, transformation-driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those, such as weapons systems, security systems, and real-time financial systems, where failure of the system carries excessively high human or financial cost. An example of such a domain is Very Large Scale Integrated (VLSI) design, the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip can be catastrophic: unlike software, chips generally cannot be patched or updated once fabricated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification. [11]
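
As a toy illustration of what "proving that the implementation corresponds to the specification" means (a sketch added here; industrial VLSI verification relies on formal methods such as model checking and theorem proving rather than brute force), a gate-level one-bit full adder can be checked exhaustively against its arithmetic specification:

    # Exhaustively verify that a gate-level one-bit full adder matches its
    # arithmetic specification for all eight input combinations.
    from itertools import product

    def full_adder_gates(a, b, cin):
        # Gate-level implementation: two XORs, two ANDs, one OR.
        partial = a ^ b
        total = partial ^ cin
        carry = (a & b) | (partial & cin)
        return total, carry

    def full_adder_spec(a, b, cin):
        # Specification: the sum of the three input bits as (sum_bit, carry_bit).
        value = a + b + cin
        return value % 2, value // 2

    assert all(
        full_adder_gates(a, b, c) == full_adder_spec(a, b, c)
        for a, b, c in product((0, 1), repeat=3)
    )
    print("implementation matches specification")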

Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE can be directly mapped to set theory and first-order logic. This allows specialized theorem provers called classifiers to analyze the various declarations about sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example defining new sets based on existing information and changing the definition of existing sets based on new data. This level of flexibility is ideal for handling the ever-changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to allow a logical semantic level on top of the existing Internet. This layer is called the Semantic Web. [12] [13]
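
A highly simplified sketch of what a classifier does is shown below (the class definitions are invented for illustration; real description-logic classifiers for languages such as KL-ONE and the Web Ontology Language reason over much richer constructs):

    # Toy classifier: each class is defined by the properties its members must
    # have and the properties they must not have. One class subsumes another
    # when its definition is entailed by the other's; a class is inconsistent
    # if it both requires and forbids the same property.
    classes = {
        "Person":  ({"animate", "human"}, set()),
        "Student": ({"animate", "human", "enrolled"}, set()),
        "Robot":   ({"mechanical"}, {"animate"}),
        "Cyborg":  ({"animate", "mechanical"}, {"animate"}),  # deliberately inconsistent
    }

    def subsumes(general, specific):
        g_required, g_forbidden = classes[general]
        s_required, s_forbidden = classes[specific]
        return g_required <= s_required and g_forbidden <= s_forbidden

    for name, (required, forbidden) in classes.items():
        if required & forbidden:
            print("inconsistent definition:", name)   # flags Cyborg

    print(subsumes("Person", "Student"))   # True: every Student is a Person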

Temporal logic is used for reasoning in concurrent systems. [14]
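
A minimal sketch of the idea (illustrative only; practical tools model-check temporal formulas over all possible interleavings of a concurrent system rather than a single recorded trace) checks the property "every request is eventually followed by a grant" on one finite execution trace:

    # Check the temporal property G(request -> F grant), i.e. "always, a
    # request is eventually followed by a grant", over a finite trace in which
    # each step is the set of propositions that hold at that step.
    trace = [
        {"request"},
        set(),
        {"grant"},
        {"request", "grant"},
    ]

    def always_eventually_granted(states):
        for i, state in enumerate(states):
            if "request" in state:
                # Look for a grant at this step or at any later step.
                if not any("grant" in later for later in states[i:]):
                    return False
        return True

    print(always_eventually_granted(trace))   # True for this trace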

See also

Related Research Articles

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.

Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses, H :- B1, ..., Bn, read declaratively as "H is true if B1 and ... and Bn are true".

Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics.

In computability theory, a system of data-manipulation rules is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine. This means that this system is able to recognize or decide other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete.

Stephen Arthur Cook is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor emeritus at the University of Toronto, Department of Computer Science and Department of Mathematics.

The history of logic deals with the study of the development of the science of valid inference (logic). Formal logics developed in ancient times in India, China, and Greece. Greek methods, particularly Aristotelian logic as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of propositional logic.

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

Hao Wang was a Chinese-American logician, philosopher, mathematician, and commentator on Kurt Gödel.

Loom is a knowledge representation language developed by researchers in the artificial intelligence research group at the University of Southern California's Information Sciences Institute. The leader of the Loom project and primary architect for Loom was Robert MacGregor. The research was primarily sponsored by the Defense Advanced Research Projects Agency (DARPA).

Robert Anthony Kowalski is an American-British logician and computer scientist, whose research is concerned with developing both human-oriented models of computing and computational models of human thinking. He has spent most of his career in the United Kingdom.

Indeterminacy in concurrent computation is concerned with the effects of indeterminacy in concurrent computation. Computation is an area in which indeterminacy is becoming increasingly important because of the massive increase in concurrency due to networking and the advent of many-core computer architectures. These computer systems make use of arbiters, which give rise to indeterminacy.

In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics and to a lesser extent computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify the correct and incorrect inferences. Logicians study the criteria for the evaluation of arguments.

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology: for example, the names of classes, sub-classes, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, will highlight the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can then assert additional information based on the input. For example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that trigger off of IF-THEN conditions in rules. Classifiers are also similar to theorem provers in that they take their input and produce their output in first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form part of the enabling technology of the Semantic Web. Modern classifiers leverage the Web Ontology Language. The models they analyze and generate are called ontologies.

The Penrose–Lucas argument is a logical argument partially based on a theory developed by mathematician and logician Kurt Gödel. In 1931, he proved that every effectively generated theory capable of proving basic arithmetic either fails to be consistent or fails to be complete. Because humans are able to see the truth of a formal system's Gödel sentences, it is argued that the human mind cannot be computed on a Turing machine that works on Peano arithmetic, since the latter cannot see the truth value of its Gödel sentence while human minds can. Mathematician Roger Penrose modified the argument in his first book on consciousness, The Emperor's New Mind (1989), where he used it to provide the basis of his theory of consciousness: orchestrated objective reduction.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

References

  1. Lewis, Harry R. (1981). Elements of the Theory of Computation. Prentice Hall.
  2. Davis, Martin (11 May 1995). "Influences of Mathematical Logic on Computer Science". In Rolf Herken (ed.). The Universal Turing Machine. Springer Verlag. ISBN 9783211826379. Retrieved 26 December 2013.
  3. Kennedy, Juliette (2014-08-21). Interpreting Gödel. Cambridge University Press. ISBN 9781107002661. Retrieved 17 August 2015.
  4. Hofstadter, Douglas R. (1999-02-05). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0465026562.
  5. McCarthy, John; P.J. Hayes (1969). "Some philosophical problems from the standpoint of artificial intelligence" (PDF). Machine Intelligence. 4: 463–502.
  6. Barr, Michael; Charles Wells (1998). Category Theory for Computing Science (PDF). Centre de Recherches Mathématiques.
  7. Newell, Allen; J.C. Shaw; H.C. Simon (1963). "Empirical explorations with the logic theory machine". In Ed Feigenbaum (ed.). Computers and Thought. McGraw Hill. pp. 109–133. ISBN 978-0262560924.
  8. Levesque, Hector; Ronald Brachman (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque (eds.). Readings in Knowledge Representation. Morgan Kaufmann. p. 49. ISBN 0-934613-01-X. "The good news in reducing KR service to theorem proving is that we now have a very clear, very specific notion of what the KR system should do; the bad news is that it is also clear that the services can not be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable."
  9. Forgy, Charles (1982). "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem" (PDF). Artificial Intelligence. 19: 17–37. doi:10.1016/0004-3702(82)90020-0. Archived from the original (PDF) on 2013-12-27. Retrieved 25 December 2013.
  10. Rich, Charles; Richard C. Waters (November 1987). "The Programmer's Apprentice Project: A Research Overview" (PDF). IEEE Expert. Archived from the original (PDF) on 2017-07-06. Retrieved 26 December 2013.
  11. Stavridou, Victoria (1993). Formal Methods in Circuit Design. Press Syndicate of the University of Cambridge. ISBN 0-521-44336-9. Retrieved 26 December 2013.
  12. MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge representation". IEEE Expert. 6 (3): 41–46. doi:10.1109/64.87683. S2CID 29575443.
  13. Berners-Lee, Tim; James Hendler; Ora Lassila (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American. 284: 34–43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.
  14. Colin Stirling (1992). "Modal and Temporal Logics". In S. Abramsky; D. M. Gabbay; T. S. E. Maibaum (eds.). Handbook of Logic in Computer Science. Vol. II. Oxford University Press. pp. 477–563. ISBN 0-19-853761-1.

Further reading