Propositional representation

Propositional representation is the psychological theory, first developed in 1973 by Zenon Pylyshyn,[1] that mental relationships between objects are represented by symbols rather than by mental images of the scene.[2]

Examples

A propositional network describing the sentence "John believes that Anna will pass her exam" is illustrated below.

Figure 1: A propositional network

Each circle represents a single proposition, and the connections between the circles describe a network of propositions.
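A network of this kind can be sketched in code. The following is a minimal illustration (the node names and role labels are assumptions, not the article's notation): each proposition is a node whose labelled links point at its predicate and arguments, and one proposition can serve as an argument of another, as in "John believes that Anna will pass her exam".

```python
# Illustrative sketch of the network in Figure 1: each proposition is a
# node whose labelled links point at its predicate and arguments.

propositions = {
    # P1: "Anna will pass her exam"
    "P1": {"relation": "pass", "agent": "Anna", "object": "exam", "time": "future"},
    # P2: "John believes P1" -- one proposition can take another as an argument
    "P2": {"relation": "believe", "agent": "John", "object": "P1"},
}

def describe(pid, props):
    """Recursively unfold a proposition node into a flat reading."""
    node = props[pid]
    parts = []
    for role, filler in node.items():
        # If a filler is itself a proposition, unfold it in place.
        filler = describe(filler, props) if filler in props else filler
        parts.append(f"{role}={filler}")
    return "(" + ", ".join(parts) + ")"

print(describe("P2", propositions))
# (relation=believe, agent=John, object=(relation=pass, agent=Anna, object=exam, time=future))
```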

Another example is the sentence "Debby donated a big amount of money to Greenpeace, an organisation which protects the environment", which contains the propositions "Debby donated money to Greenpeace", "The amount of money was big" and "Greenpeace protects the environment". If one or more of the propositions is false, the whole sentence is false. This is illustrated in Figure 2:
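The claim that the whole sentence is false whenever any component proposition is false amounts to treating the sentence as a logical conjunction of its propositions. A small sketch (the truth values are illustrative):

```python
# The sentence about Debby is true only if every component proposition
# is true -- i.e. it behaves as the conjunction of its propositions.

propositions = {
    "Debby donated money to Greenpeace": True,
    "The amount of money was big": True,
    "Greenpeace protects the environment": True,
}

sentence_is_true = all(propositions.values())
print(sentence_is_true)  # True

# Falsify one component and the whole sentence becomes false.
propositions["The amount of money was big"] = False
print(all(propositions.values()))  # False
```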

Figure 2: A more complex propositional network

Each proposition consists of a set of predicates and arguments which are represented in the form of predicate calculus. For instance:

An event (X): John hit Chris with a unicycle; the unicycle broke; because of this John started to cry, which caused Chris to be happy.

A propositional representation of this event, in schematic predicate–argument form:

P: [hit (John, Chris, unicycle)]
Q: [broke (unicycle)]
R: [cry (John)]
S: [happy (Chris)]
Cause (Q, R), Cause (R, S)
Each set of predicates and arguments is in turn manipulated as a proposition. Words like hit, broke, cry and happy are first-order predicates, while Cause is a second-order predicate that takes whole propositions as its arguments. The arguments often consist of an agent/subject (e.g. John in 'P'), a recipient/object (e.g. Chris in 'P') and an instrument (e.g. the unicycle in 'P'). Thus the event/statement "John hit Chris with the unicycle" is represented as the proposition 'P'.
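The predicate–argument structure of a single proposition such as 'P' can be sketched as a record with one field per role (the field names here are assumptions chosen to match the roles in the text):

```python
from collections import namedtuple

# Illustrative sketch: a proposition as a first-order predicate plus
# role-filling arguments (agent, recipient, instrument).
Proposition = namedtuple("Proposition", ["predicate", "agent", "recipient", "instrument"])

# P: "John hit Chris with the unicycle"
P = Proposition(predicate="hit", agent="John", recipient="Chris", instrument="unicycle")

print(P.predicate)  # hit
print(P.agent)      # John
```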

Features of particular objects may also be characterized through attribute lists. 'John' as a singular object may have the attributes 'plays guitar', 'juggles', 'eats a lot', 'rides a unicycle', etc. Reference to 'John' thus identifies him as the object of thought in virtue of his having certain of these attributes. In predicate calculus, if John (a) has the property of riding a unicycle (F), we may write Fa. These elements have been called semantic primitives or semantic markers/features. Each primitive may in turn form part of a propositional statement, which could itself be represented by an abstract symbol such as 'P'. The primitives play a crucial role in categorizing and classifying objects and concepts.
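An attribute list of this kind can be sketched as a simple lookup: an object is identified by the attributes predicated of it, and asking whether Fa holds is a membership test (the attribute strings are the examples from the text; the function name is illustrative):

```python
# Sketch of an attribute list: features predicated of a singular object.
attributes = {
    "John": {"plays guitar", "juggles", "eats a lot", "rides a unicycle"},
}

def has_property(obj, prop, db):
    """Fa: does object `obj` have attribute `prop`?"""
    return prop in db.get(obj, set())

print(has_property("John", "rides a unicycle", attributes))  # True
print(has_property("John", "plays piano", attributes))       # False
```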

The meaningful relations between ideas and concepts expressed between and within the propositions are dealt with in part through the general laws of inference. One of the most common of these is modus ponendo ponens (MPP): given two propositions P and Q and a law of inference relating them (P → Q), then from P we may infer Q. Relations of causation may be expressed in this fashion, i.e. one state (P) causing (→) another (Q).
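Modus ponens can be sketched as a mechanical operation over symbolic propositions: whenever an antecedent is known and a rule P → Q is available, the consequent is added to what is known. Repeating this gives a simple forward-chaining closure (the rule set here is illustrative):

```python
# Minimal sketch of modus ponens: given P and the rule P -> Q, infer Q.
# Repeated application yields a simple forward-chaining closure.

rules = [("P", "Q"), ("Q", "R")]  # each pair encodes antecedent -> consequent
known = {"P"}

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in known and consequent not in known:
            known.add(consequent)  # modus ponens: from P and P -> Q, conclude Q
            changed = True

print(sorted(known))  # ['P', 'Q', 'R']
```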

So a purely formal characterization of the event (X) written above in natural language would be something like:

hit (John, Chris, unicycle) ∧ broke (unicycle) ∧ Cause [broke (unicycle), cry (John)] ∧ Cause [cry (John), happy (Chris)]
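The whole event (X) can be assembled in code by letting the second-order predicate Cause take propositions, rather than individuals, as its arguments. A sketch (the helper `prop` and all names are illustrative):

```python
# Sketch: assembling the event (X) from first-order propositions and a
# second-order Cause predicate whose arguments are whole propositions.

def prop(predicate, *args):
    """Build a proposition as a (predicate, arguments) pair."""
    return (predicate, args)

hit = prop("hit", "John", "Chris", "unicycle")
broke = prop("broke", "unicycle")
cry = prop("cry", "John")
happy = prop("happy", "Chris")

# Cause is second-order: it relates whole propositions, not individuals.
event_X = [hit, broke, prop("Cause", broke, cry), prop("Cause", cry, happy)]

for predicate, args in event_X:
    print(predicate, args)
```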

References

  1. Pylyshyn, Zenon W. (1973). "What the mind's eye tells the mind's brain: A critique of mental imagery". Psychological Bulletin. 80: 1–36. doi:10.1007/978-94-010-1193-8_1.
  2. Elport, Daniel. "Cognitive Psychology and Cognitive Neuroscience". Wikibooks, July 2007. Accessed 7 March 2011.