Semantic holism

Semantic holism is a theory in the philosophy of language to the effect that a certain part of language, be it a term or a complete sentence, can only be understood through its relations to a (previously understood) larger segment of language. There is substantial controversy, however, as to exactly what the larger segment of language in question consists of. In recent years, the debate surrounding semantic holism, which is one among the many forms of holism that are debated and discussed in contemporary philosophy, has tended to centre on the view that the "whole" in question consists of an entire language.

A philosophical theory or philosophical position is a set of beliefs that explains or accounts for a general philosophy or a specific branch of philosophy. The word "theory" is used here in its colloquial sense, not in the technical sense it has in the sciences. While any sort of thesis or opinion may be termed a position, in analytic philosophy it is thought best to reserve the word "theory" for systematic, comprehensive attempts to solve problems.

Philosophy of language, in the analytic tradition, explores logic, the nature of meaning, and accounts of the mind.

Holism is the idea that systems and their properties should be viewed as wholes, not just as a collection of parts.

Background

Since the use of a linguistic expression is only possible if the speaker who uses it understands its meaning, one of the central problems for analytic philosophers has always been the question of meaning. What is it? Where does it come from? How is it communicated? And, among these questions, what is the smallest unit of meaning, the smallest fragment of language with which it is possible to communicate something? At the end of the 19th and beginning of the 20th century, Gottlob Frege and his followers abandoned the view, common at the time, that a word gets its meaning in isolation, independently from all the rest of the words in a language. Frege, as an alternative, formulated his famous context principle, according to which it is only within the context of an entire sentence that a word acquires its meaning. In the 1950s, the agreement that seemed to have been reached regarding the primacy of sentences in semantic questions began to unravel with the collapse of the movement of logical positivism and the powerful influence exercised by the later Wittgenstein, who wrote in the Philosophical Investigations that "to understand a sentence means to understand a language". About the same time or shortly after, W. V. O. Quine wrote that "the unit of empirical significance is the whole of science"; and Donald Davidson, in 1967, put it even more sharply by saying that "a sentence (and therefore a word) has meaning only in the context of a (whole) language".

Pragmatics is a subfield of linguistics and semiotics that studies the ways in which context contributes to meaning. Pragmatics encompasses speech act theory, conversational implicature, talk in interaction and other approaches to language behavior in philosophy, sociology, linguistics and anthropology. Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on structural and linguistic knowledge of the speaker and listener, but also on the context of the utterance, any pre-existing knowledge about those involved, the inferred intent of the speaker, and other factors. In this respect, pragmatics explains how language users are able to overcome apparent ambiguity, since meaning relies on the manner, place, time, etc. of an utterance.

Friedrich Ludwig Gottlob Frege was a German philosopher, logician, and mathematician. He is understood by many to be the father of analytic philosophy, concentrating on the philosophy of language and mathematics. Though his work was largely ignored during his lifetime, Giuseppe Peano (1858–1932) and Bertrand Russell (1872–1970) introduced it to later generations of logicians and philosophers.

In the philosophy of language, the context principle is a form of semantic holism holding that a philosopher should "never ... ask for the meaning of a word in isolation, but only in the context of a proposition".

Problems

If semantic holism is interpreted as the thesis that any linguistic expression E (a word, a phrase or sentence) of some natural language L cannot be understood in isolation and that there are inevitably many ties between the expressions of L, it follows that to understand E one must understand a set K of expressions to which E is related. If, in addition, no limits are placed on the size of K (as in the cases of Davidson, Quine and, perhaps, Wittgenstein), then K coincides with the "whole" of L.
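This regress from a single expression to the whole of L can be pictured as graph reachability. A minimal sketch (the toy language and its "relatedness" relation are invented purely for illustration):

```python
from collections import deque

# Hypothetical relatedness relation among expressions of a toy language L.
related = {
    "dog": ["animal", "bark"],
    "animal": ["dog", "living"],
    "bark": ["dog", "sound"],
    "living": ["animal"],
    "sound": ["bark"],
}

def closure(expression):
    """Return the set K of expressions one must understand to understand `expression`."""
    seen = {expression}
    queue = deque([expression])
    while queue:
        current = queue.popleft()
        for neighbour in related.get(current, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# With no limit placed on K, the closure of a single word is the whole toy language.
print(closure("dog") == set(related))  # True
```

Because the toy relation is connected, the closure of any one word is the entire language; bounding K would mean deleting edges from the relation, which is just what the molecularist positions discussed later attempt.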

The many and substantial problems with this position have been described by Michael Dummett, Jerry Fodor, Ernest Lepore and others. In the first place, it is impossible to understand how a speaker of L can acquire knowledge of (learn) the meaning of E, for any expression E of the language. Given the limits of our cognitive abilities, we will never be able to master the whole of the English (or Italian or German) language, even on the assumption that languages are static and immutable entities (which is false). Therefore, if one must understand all of a natural language L to understand the single word or expression E, then language learning is simply impossible.

Sir Michael Anthony Eardley Dummett, FBA was an English academic described as "among the most significant British philosophers of the last century and a leading campaigner for racial tolerance and equality." He was, until 1992, Wykeham Professor of Logic at the University of Oxford. He wrote on the history of analytic philosophy, notably as an interpreter of Frege, and made original contributions particularly in the philosophies of mathematics, logic, language and metaphysics. He was known for his work on truth and meaning and their implications for debates between realism and anti-realism, a term he helped to popularize. He devised the Quota Borda system of proportional voting, based on the Borda count. In mathematical logic, he developed an intermediate logic, already studied by Kurt Gödel: the Gödel–Dummett logic.

Jerry Alan Fodor was an American philosopher and cognitive scientist. He held the position of State of New Jersey Professor of Philosophy, Emeritus, at Rutgers University and was the author of many works in the fields of philosophy of mind and cognitive science, in which he laid the groundwork for the modularity of mind and the language of thought hypotheses, among other ideas. He was known for his provocative and sometimes polemical style of argumentation and as "one of the principal philosophers of mind of the late twentieth and early twenty-first century. In addition to having exerted an enormous influence on virtually every portion of the philosophy of mind literature since 1960, Fodor's work has had a significant impact on the development of the cognitive sciences."

Ernest "Ernie" Lepore is an American philosopher and cognitive scientist. He is Acting Director of the Rutgers Center for Cognitive Science and a professor at Rutgers University, where he teaches logic. He is well known for his work on the philosophy of language and mind, often in collaboration with Jerry Fodor, Herman Cappelen and Kirk Ludwig, as well as for his work on philosophical logic and the philosophy of Donald Davidson. He earned his Ph.D. from the University of Minnesota.

Semantic holism, in this sense, also fails to explain how two speakers can mean the same thing when using the same linguistic expression, and therefore how communication is even possible between them. Given a sentence P, since Fred and Mary have each mastered different parts of the English language and P is related differently to the sentences in each part, the result is that P means one thing for Fred and something else for Mary. Moreover, if a sentence P derives its meaning from the relations it bears to the totality of sentences of a language, then as soon as an individual's repertoire changes by the addition or elimination of a sentence P′, the totality of relations changes, and therefore so does the meaning of P. As this is a very common phenomenon, the result is that P has two different meanings at two different moments in the life of the same person. Consequently, if I accept the truth of a sentence and then reject it later on, the meaning of what I rejected and of what I accepted are completely different, and therefore I cannot change my opinions regarding the same sentences.

Holism of mental content

These sorts of counterintuitive consequences of semantic holism also affect another form of holism, often identified with but, in fact, distinct from semantic holism: the holism of mental content. This is the thesis that a particular propositional attitude (a thought, desire or belief) acquires its content by virtue of the role that it plays within the web that connects it to all the other propositional attitudes of an individual. Since there is a very tight relationship between the content of a mental state M and the sentence P that expresses it and makes it publicly communicable, the tendency in recent discussion is to consider the term "content" to apply indifferently both to linguistic expressions and to mental states, regardless of the extremely controversial question of which category (the mental or the linguistic) has priority over the other and which possesses only a derived meaning.

So, it would seem that semantic holism ties the philosopher's hands. By making it impossible to explain language learning and to provide a unique and consistent description of the meanings of linguistic expressions, it blocks off any possibility of formulating a theory of meaning; and, by making it impossible to individuate the exact contents of any propositional attitude, given the necessity of considering a potentially infinite and continuously evolving set of mental states, it blocks off the possibility of formulating a theory of the mind.

A propositional attitude is a mental state held by an agent toward a proposition.

Confirmation holism

The key to answering this question lies in going back to Quine and his attack on logical positivism. The logical positivists, who dominated the philosophical scene for almost the entire first half of the twentieth century, maintained that genuine knowledge consisted in all and only such knowledge as was capable of manifesting a strict relationship with empirical experience. Therefore, they believed, the only linguistic expressions (manifestations of knowledge) that had meaning were those that either directly referred to observable entities or that could be reduced to a vocabulary that directly referred to such entities. A sentence S contained knowledge only if it possessed a meaning, and it possessed a meaning only if it was possible to refer to a set of experiences that could, at least potentially, verify it and to another set that could potentially falsify it. Underlying all this is an implicit and powerful connection between epistemological and semantic questions. This connection carries over into the work of Quine in "Two Dogmas of Empiricism".

Epistemology is the branch of philosophy concerned with the theory of knowledge.

Quine's holistic argument against the neo-positivists set out to demolish the assumption that every sentence of a language is bound univocally to its own set of potential verifiers and falsifiers; the result was that the epistemological value of every sentence must depend on the entire language. Since the epistemological value of a sentence, for Quine just as for the positivists, was the meaning of that sentence, the meaning of every sentence must depend on that of every other. As Quine states it:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience... no particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole.

For Quine then (although Fodor and Lepore have maintained the contrary), and for many of his followers, confirmation holism and semantic holism are inextricably linked. Since confirmation holism is widely accepted among philosophers, a serious question for them has been to determine whether and how the two holisms can be distinguished or how the undesirable consequences of unbuttoned holism, as Michael Dummett has called it, can be limited.

In philosophy of science, confirmation holism, also called epistemological holism, is the view that no individual statement can be confirmed or disconfirmed by an empirical test, but only a set of statements.

Moderate holism

Numerous philosophers of language have taken the latter avenue, abandoning the early Quinean holism in favour of what Michael Dummett has labelled semantic molecularism. These philosophers generally deny that the meaning of an expression E depends on the meanings of the words of the entire language L of which it is part and maintain, instead, that the meaning of E depends on some subset of L. These positions, notwithstanding the fact that many of their proponents continue to call themselves holists, are actually intermediate between holism and atomism.

Dummett, for example, after rejecting Quinean holism (holism tout court in his sense), takes precisely this approach. But those who would opt for some version of moderate holism need to distinguish between the parts of a language that are "constitutive" of the meaning of an expression E and those that are not, without falling back into the extraordinarily problematic analytic/synthetic distinction. Fodor and Lepore (1992) present several arguments to demonstrate that this is impossible.

Arguments against molecularism

According to Fodor and Lepore, there is a quantificational ambiguity in the molecularist's typical formulation of the thesis: "someone can believe p only if she believes a sufficient number of other propositions." They propose to disambiguate this assertion into a strong and a weak version:

(S) There is a particular set of propositions q1, ..., qn such that, for every person x, x believes p only if x believes each of q1, ..., qn.
(W) For every person x, x believes p only if there is some set of propositions q1, ..., qn, which may vary from believer to believer, such that x believes each of them.

The first statement asserts that there are particular other propositions, besides p, that anyone must believe in order to believe p. The second says only that one cannot believe p unless one believes some other propositions or other. If one accepts the first reading, then one must accept the existence of a set of sentences that are necessarily believed and hence fall back into the analytic/synthetic distinction. The second reading is too weak to serve the molecularist's needs, since it only requires that if, say, two people believe the same proposition p, each of them also believes at least one other proposition. But, in this way, each one will connect his own inferences to p, and communication will remain impossible.
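The difference between the two readings is a matter of quantifier scope, and can be checked mechanically. A sketch with invented belief sets, chosen so that the weak reading holds while the strong one fails:

```python
from itertools import chain

# Toy belief sets (invented for illustration): who believes which propositions.
beliefs = {
    "Fred": {"p", "q1", "q2"},
    "Mary": {"p", "q3"},
}
candidates = set(chain.from_iterable(beliefs.values())) - {"p"}

# (W): every believer of p believes at least one other proposition (any one will do).
weak = all(bs - {"p"} for bs in beliefs.values() if "p" in bs)

# (S): some one proposition q, distinct from p, is believed by every believer of p.
strong = any(all(q in bs for bs in beliefs.values() if "p" in bs) for q in candidates)

print(weak, strong)  # True False
```

Fred and Mary each believe something besides p, so (W) holds; but they share no such proposition, so (S) fails, which is why only (S) commits the molecularist to necessarily believed sentences.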

Carlo Penco criticizes this argument by pointing out that there is an intermediate reading that Fodor and Lepore have left out of account:

(I) For any two people x and y, x and y can both believe p only if there is at least one proposition q, distinct from p, that both x and y believe.

This says that two people cannot believe the same proposition unless they also both believe some proposition different from p. This helps to some extent, but there is still a problem in identifying how the different propositions shared by the two speakers are specifically related to each other. Dummett's proposal is based on an analogy from logic. To understand a logically complex sentence it is necessary to understand one that is logically less complex. In this manner, the distinction between logically less complex sentences that are constitutive of the meaning of a logical constant and logically more complex sentences that are not takes on the role of the old analytic/synthetic distinction: "the comprehension of a sentence in which the logical constant does not figure as a principal operator depends on the comprehension of the constant, but does not contribute to its constitution." For example, one can explain the use of the conditional in p → c by stating that the whole sentence is false if p (the part before the arrow) is true and c is false. But this explanation amounts to reading p → c as ¬p ∨ c, so one must already know the meaning of "not" and "or"; these are, in turn, explained by giving the rules of introduction for simple schemes such as ¬p and p ∨ q. To comprehend a sentence is to comprehend all and only the sentences of less logical complexity than the sentence that one is trying to comprehend. However, there is still a problem with extending this approach to natural languages. If I understand the word "hot" because I have understood the phrase "this stove is hot", it seems that I am defining the term by reference to a set of stereotypical objects with the property of being hot. If I don't know what it means for these objects to be "hot", such a set or listing of objects is not helpful.
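Dummett's example of the conditional can be verified with a truth table. The sketch below assumes nothing beyond the classical truth-table reading, and confirms that explaining the conditional through the simpler constants "not" and "or" gets every row right:

```python
from itertools import product

# Dummett's example: the conditional "p -> c" is explained via the simpler,
# already-understood constants "not" and "or": p -> c is (not p) or c.
def conditional(p, c):
    return (not p) or c

# The truth-table definition: p -> c is false only when p is true and c is false.
for p, c in product([True, False], repeat=2):
    assert conditional(p, c) == (not (p and not c))
print("p -> c agrees with (not p) or c on all four rows")
```

The check exercises all four truth-value assignments, so the two characterizations coincide exactly, which is what licenses ordering the conditional above "not" and "or" in logical complexity.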

Holism and compositionality

The relationship between compositionality and semantic holism has also been of interest to many philosophers of language. On the surface, it would seem that these two ideas are in complete and irremediable contradiction. Compositionality is the principle that the meaning of a complex expression depends on the meaning of its parts and on its mode of composition. As stated before, holism, on the other hand, is the thesis that the meanings of expressions of a language are determined by their relations with the other expressions of the language as a whole. Peter Pagin, in an essay called "Are Compositionality and Holism Compatible?", identifies three points of incompatibility between these two hypotheses. The first consists in the simple observation that while, for holism, the meaning of the whole would seem to have priority over that of its parts, for compositionality the reverse is true: the meaning of the parts precedes that of the whole. The second incoherence consists in the fact that any attempt to reconcile compositionality and holism would apparently require attributing "strange" meanings to the components of larger expressions. Pagin takes a specific holistic theory of meaning, inferential role semantics (the theory according to which the meaning of an expression is determined by the inferences that it involves), as his paradigm of holism. If we interpret this theory holistically, every accepted inference that involves some expression will enter into the meaning of that expression. Suppose, for example, that Fred believes that "brown cows are dangerous"; that is, he accepts the inference from "brown cow" to "dangerous". On this view, the inference is now part of the meaning of "brown cow". By compositionality, the meaning of "brown cow" is determined by the meanings of "brown" and "cow", so the inference to "dangerous" must apparently enter into the meanings of those constituents as well.
But is this really an inevitable consequence of accepting holistic inferential role semantics? To see why it is not, assume that there is an inference relation I between two expressions x and y which applies just in case F accepts the inference from x to y, and suppose that the extension of I contains the pairs ("The sky is blue and leaves are green", "the sky is blue") and ("brown cow", "dangerous").

There is also a second relation P, which applies to two expressions just in case the first is part of the second. So ("brown", "brown cow") belongs to the extension of P. Two more relations, "Left" and "Right", are required:

L(α, β, γ) if and only if P(α, β) and I(β, γ)
R(α, β, γ) if and only if P(α, γ) and I(β, γ)

The first relation means that L applies between α, β and γ just in case α is a part of β and F accepts the inference from β to γ. The relation R applies between α, β and γ just in case α is a part of γ and F accepts the inference from β to γ.

The Global Role, G(α), of a simple expression α can then be defined as:

G(α) = ({(β, γ) : L(α, β, γ)}, {(β, γ) : R(α, β, γ)})

The Global Role of α thus consists of a pair of sets of pairs of expressions. If F accepts the inference from β to γ and α is a part of β, then the pair (β, γ) is an element of the first set in the Global Role of α; if α is instead a part of γ, then (β, γ) belongs to the second set. This makes the Global Roles of simple expressions sensitive to changes in the acceptance of inferences by F. The Global Role for complex expressions can be defined as:

G(β) = (G(α1), ..., G(αn)), where β = Σ(α1, ..., αn) for some syntactic operation Σ

The Global Role of the complex expression β is the n-tuple of the Global Roles of its constituent parts. The next problem is to develop a function that assigns meanings to Global Roles. This function is generally required to be a homomorphism: for every syntactic operation Σ that forms a complex expression β from expressions α1, ..., αn, there is a corresponding function f_Σ from meanings to meanings:

m(G(β)) = f_Σ(m(G(α1)), ..., m(G(αn)))

This function is one to one if it assigns a distinct meaning to every distinct Global Role. According to Fodor and Lepore, holistic inferential role semantics leads to the absurd conclusion that part of the meaning of "brown cow" is constituted by the inference "brown cow implies dangerous". This is true if the function from Global Roles to meanings is one to one: in that case, the meanings of "brown", "cow" and "dangerous" all contain the inference "brown cows are dangerous", since on that assignment "brown" would not have the meaning it has unless it had the Global Role that it has. But if we change the assignment so that it is many to one (call it h*), many Global Roles can share the same meaning. Suppose that the meaning of "brown" is given by M("brown"). It does not follow that L("brown", "brown cow", "dangerous") holds unless all of the Global Roles that h* maps to M("brown") contain the pair ("brown cow", "dangerous"), and this is not required by holism. In fact, with this many-to-one relation from Global Roles to meanings, it is possible to change one's opinion with respect to an inference consistently. Suppose that B and C initially accept all of the same inferences, speak the same language, and both accept that "brown cows imply dangerous". Suddenly, B changes his mind and rejects the inference. If the function from Global Roles to meanings is one to one, then many of B's Global Roles have changed, and therefore so have his meanings. But if there is no one-to-one assignment, then B's change of belief about brown cows does not necessarily imply a difference in the meanings of the terms he uses. Therefore, it is not intrinsic to holism that communication or change of opinion is impossible.
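Pagin's point can be sketched computationally. In the sketch below the expressions, the parthood relation, and the particular many-to-one assignment h* are all invented for illustration: under a one-to-one assignment, rejecting the brown-cow inference changes the meaning of "brown"; under a many-to-one assignment that abstracts away from that inference, it does not.

```python
# A toy model of Pagin's construction. I: the inferences subject F accepts,
# as (premise, conclusion) pairs; P: parthood, as (part, whole) pairs.
inferences = {("the sky is blue and leaves are green", "the sky is blue"),
              ("brown cow", "dangerous")}
parthood = {("brown", "brown cow"), ("cow", "brown cow"),
            ("the sky is blue", "the sky is blue and leaves are green")}

def global_role(alpha, accepted):
    """G(alpha): pairs (beta, gamma) of accepted inferences in which alpha
    occurs as part of the premise (left set) or of the conclusion (right set)."""
    left = frozenset((b, g) for (b, g) in accepted if (alpha, b) in parthood)
    right = frozenset((b, g) for (b, g) in accepted if (alpha, g) in parthood)
    return (left, right)

def meaning_one_to_one(alpha, accepted):
    # One-to-one assignment: the meaning just is the Global Role.
    return global_role(alpha, accepted)

def meaning_many_to_one(alpha, accepted):
    # Many-to-one assignment h*: meanings abstract away from the brown-cow
    # inference, so distinct Global Roles can share a single meaning.
    left, right = global_role(alpha, accepted)
    drop = {("brown cow", "dangerous")}
    return (left - drop, right - drop)

# B rejects the inference from "brown cow" to "dangerous".
revised = inferences - {("brown cow", "dangerous")}

print(meaning_one_to_one("brown", inferences) == meaning_one_to_one("brown", revised))    # False
print(meaning_many_to_one("brown", inferences) == meaning_many_to_one("brown", revised))  # True
```

The change of mind alters B's Global Role for "brown", but under h* the assigned meaning is unchanged, so B and C can still mean the same thing by their words.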

Holism and externalism

Since the concept of semantic holism, as explained above, is often used to refer to theories of meaning in natural languages but also to theories of mental content such as the hypothesis of a language of thought, the question often arises as to how to reconcile the idea of semantic holism (in the sense of the meanings of expressions in mental languages) with the phenomenon called externalism in philosophy of mind. Externalism is the thesis that the propositional attitudes of an individual are determined, at least in part, by her relations with her environment (both social and natural). Hilary Putnam formulated the thesis of the natural externalism of mental states in his The Meaning of "Meaning". In it, he described his famous thought experiment involving Twin Earths: two individuals, Calvin and Carvin, live, respectively, on the real Earth (E) of our everyday experience and on an exact copy (E′), with the only difference being that on E "water" denotes the substance H2O, while on E′ it denotes a substance macroscopically identical to water but actually composed of something else, XYZ. According to Putnam, only Calvin has genuine experiences that involve water, so only his term "water" really refers to water.

Tyler Burge, in Individualism and the Mental, describes a different thought experiment that led to the notion of the social externalism of mental contents. In Burge's experiment, a person named Jeffray believes that he has arthritis in his thighs, and we can correctly attribute to him the (mistaken) belief that he has arthritis in his thighs because he is ignorant of the fact that arthritis is an ailment of the joints and so cannot occur in the thigh. In another society, there is an individual named Goodfrey who also believes that he has arthritis in the thighs. But in the case of Goodfrey the belief is correct, because in the counterfactual society in which he lives "arthritis" is defined as a disease that can include the thighs.

The question then arises of the possibility of reconciling externalism with holism. The one seems to be saying that meanings are determined by the external relations (with society or the world), while the other suggests that meaning is determined by the relation of words (or beliefs) to all the other words (or beliefs). Frederik Stjernfelt identifies at least three possible ways to reconcile them and then points out some objections.

The first approach is to insist that there is no conflict because holists do not mean the phrase "determine beliefs" in the sense of individuation but rather of attribution. But the problem with this is that if one is not a "realist" about mental states, then all we are left with is the attributions themselves and, if these are holistic, then we really have a form of hidden constitutive holism rather than a genuine attributive holism. But if one is a "realist" about mental states, then why not say that we can actually individuate them and therefore that instrumentalist attributions are just a short-term strategy?

Another approach is to say that externalism is valid only for certain beliefs and that holism only suggests that beliefs are determined only in part by their relations with other beliefs. In this way, it is possible to say that externalism applies only to those beliefs not determined by their relations with other beliefs (or for the part of a belief that is not determined by its relations with other parts of other beliefs), and holism is valid to the extent that beliefs (or parts of beliefs) are not determined externally. The problem here is that the whole scheme is based on the idea that certain relations are constitutive (i.e. necessary) for the determination of the beliefs and others are not. Thus, we have reintroduced the idea of an analytic/synthetic distinction with all of the problems that that carries with it.

A third possibility is to insist that there are two distinct types of belief: those determined holistically and those determined externally. Perhaps the external beliefs are those that are determined by their relations with the external world through observation and the holistic ones are the theoretical statements. But this implies the abandonment of a central pillar of holism: the idea that there can be no one to one correspondence between behavior and beliefs. There will be cases in which the beliefs that are determined externally correspond one to one with perceptual states of the subject.

One last proposal is to carefully distinguish between so-called narrow content states and broad content states. The first would be determined in a holistic manner and the second non-holistically and externalistically. But how to distinguish between the two notions of content while providing a justification of the possibility of formulating an idea of narrow content that does not depend on a prior notion of broad content?

These are some of the problems and questions that have still to be resolved by those who would adopt a position of "holistic externalism" or "externalist holism".

Related Research Articles

Gamma distribution probability distribution

In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are three different parametrizations in common use:

  1. With a shape parameter k and a scale parameter θ.
  2. With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.
  3. With a shape parameter k and a mean parameter μ = = α/β.

In programming language theory and proof theory, the Curry–Howard correspondence is the direct relationship between computer programs and mathematical proofs.

An infinitary logic is a logic that allows infinitely long statements and/or infinitely long proofs. Some infinitary logics may have different properties from those of standard first-order logic. In particular, infinitary logics may fail to be compact or complete. Notions of compactness and completeness that are equivalent in finitary logic sometimes are not so in infinitary logics. Therefore for infinitary logics, notions of strong compactness and strong completeness are defined. This article addresses Hilbert-type infinitary logics, as these have been extensively studied and constitute the most straightforward extensions of finitary logic. These are not, however, the only infinitary logics that have been formulated or studied.

In mathematics, and specifically differential geometry, a connection form is a manner of organizing the data of a connection using the language of moving frames and differential forms.

Lambda cube a set of 3 independent extensions (type-dependent terms, term-dependent types, type-dependent types) of simply typed λ-calculus, generating 8 different typed systems (λ→, System F, λω̱, Fω, λP, λP2, λPω̱, calculus of constructions) arranged i

In mathematical logic and type theory, the λ-cube is a framework introduced by Henk Barendregt to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type. The respective dimensions of the λ-cube correspond to:

Einstein tensor Tensor used in general relativity

In differential geometry, the Einstein tensor is used to express the curvature of a pseudo-Riemannian manifold. In general relativity, it occurs in the Einstein field equations for gravitation that describe spacetime curvature in a manner consistent with energy and momentum conservation.

Inverse-gamma distribution

In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution. Perhaps the chief use of the inverse gamma distribution is in Bayesian statistics, where the distribution arises as the marginal posterior distribution for the unknown variance of a normal distribution, if an uninformative prior is used, and as an analytically tractable conjugate prior, if an informative prior is required.

Beta prime distribution

In probability theory and statistics, the beta prime distribution is an absolutely continuous probability distribution defined for with two parameters α and β, having the probability density function:

Maxwells equations in curved spacetime electromagnetism in general relativity

In physics, Maxwell's equations in curved spacetime govern the dynamics of the electromagnetic field in curved spacetime or where one uses an arbitrary coordinate system. These equations can be viewed as a generalization of the vacuum Maxwell's equations which are normally formulated in the local coordinates of flat spacetime. But because general relativity dictates that the presence of electromagnetic fields induce curvature in spacetime, Maxwell's equations in flat spacetime should be viewed as a convenient approximation.

In natural language processing, latent Dirichlet allocation (LDA) is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. For example, if observations are words collected into documents, it posits that each document is a mixture of a small number of topics and that each word's presence is attributable to one of the document's topics. LDA is an example of a topic model.
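
The generative process just described can be sketched with NumPy alone; the sizes below are purely illustrative. Each document draws topic proportions from a Dirichlet prior, and each word draws a topic and then a word from that topic's distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_topics, vocab_size, n_docs, doc_len = 3, 20, 5, 50  # illustrative sizes

# Topic-word distributions: one categorical distribution over the vocabulary per topic.
topic_word = rng.dirichlet(np.full(vocab_size, 0.1), size=n_topics)

docs = []
for _ in range(n_docs):
    # Per-document topic proportions drawn from a Dirichlet prior.
    theta = rng.dirichlet(np.full(n_topics, 0.5))
    # Each word: pick a topic from theta, then a word from that topic's distribution.
    topics = rng.choice(n_topics, size=doc_len, p=theta)
    words = np.array([rng.choice(vocab_size, p=topic_word[z]) for z in topics])
    docs.append(words)

print(len(docs), docs[0].shape)
```

Fitting LDA to real data inverts this process, estimating the topic-word and document-topic distributions from the observed words.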

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.
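
A tiny worked example of a Hilbert-style derivation: the textbook proof of A → A from the axiom schemes K: A → (B → A) and S: (A → (B → C)) → ((A → B) → (A → C)), with modus ponens as the only inference rule. Formulas are encoded as tuples for illustration.

```python
def imp(a, b):
    """Build the formula a -> b."""
    return ('->', a, b)

def mp(major, minor):
    """Modus ponens: from (a -> b) and a, conclude b."""
    op, antecedent, consequent = major
    assert op == '->' and antecedent == minor, "modus ponens does not apply"
    return consequent

A = 'A'
s1 = imp(imp(A, imp(imp(A, A), A)), imp(imp(A, imp(A, A)), imp(A, A)))  # S instance
s2 = imp(A, imp(imp(A, A), A))                                          # K instance
s3 = mp(s1, s2)                          # (A -> (A -> A)) -> (A -> A)
s4 = imp(A, imp(A, A))                                                  # K instance
s5 = mp(s3, s4)                          # A -> A
print(s5)
```

Every step is either an axiom instance or follows from two earlier steps by modus ponens, which is the defining shape of a Hilbert-style proof.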

In mathematical logic, the rules of passage govern how quantifiers distribute over the basic logical connectives of first-order logic. The rules of passage govern the "passage" (translation) from any formula of first-order logic to the equivalent formula in prenex normal form, and vice versa.
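
Two representative rules of passage, valid when the variable x does not occur free in P:

```latex
\forall x\, (P \lor Q(x)) \;\leftrightarrow\; P \lor \forall x\, Q(x), \qquad
\exists x\, (P \land Q(x)) \;\leftrightarrow\; P \land \exists x\, Q(x).
```

Applied left to right, such equivalences pull quantifiers outward, toward prenex normal form; applied right to left, they push them inward.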

In continuum mechanics, a compatible deformation tensor field in a body is the unique tensor field obtained when the body is subjected to a continuous, single-valued displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions; they were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.
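
In linearized elasticity, where the strain is ε_ij = ½(u_{i,j} + u_{j,i}), the Saint-Venant compatibility conditions take the form:

```latex
\varepsilon_{ij,kl} + \varepsilon_{kl,ij} - \varepsilon_{ik,jl} - \varepsilon_{jl,ik} = 0 .
```

These are satisfied identically by any strain derived from a single-valued displacement field; conversely, on a simply connected body, they guarantee that such a displacement field exists.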

In mathematics, Jacobi polynomials P_n^(α, β)(x) are a class of classical orthogonal polynomials. They are orthogonal with respect to the weight (1 − x)^α(1 + x)^β on the interval [−1, 1]. The Gegenbauer polynomials, and thus also the Legendre, Zernike and Chebyshev polynomials, are special cases of the Jacobi polynomials.
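
A numerical sketch of both claims, assuming SciPy: with α = β = 0 the Jacobi polynomials coincide with the Legendre polynomials, and distinct-degree Jacobi polynomials are orthogonal against the stated weight.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi, eval_legendre

x = np.linspace(-1.0, 1.0, 101)

# With alpha = beta = 0, Jacobi polynomials reduce to Legendre polynomials.
for n in range(5):
    assert np.allclose(eval_jacobi(n, 0.0, 0.0, x), eval_legendre(n, x))

# Orthogonality of degrees 2 and 3 against the weight (1 - x)^alpha (1 + x)^beta.
alpha, beta = 1.0, 2.0  # illustrative parameter values
inner, _ = quad(lambda t: (1 - t)**alpha * (1 + t)**beta
                * eval_jacobi(2, alpha, beta, t) * eval_jacobi(3, alpha, beta, t), -1, 1)
print(inner)
```

The weighted inner product of the two polynomials should vanish up to quadrature error.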

In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X equal to the number of failures needed to get r successes in a sequence of independent Bernoulli trials where the probability p of success on each trial is constant within any given experiment but is itself a random variable following a beta distribution, varying between different experiments. Thus the distribution is a compound probability distribution.
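
The compound construction can be sketched directly with NumPy, using illustrative parameter values: draw p from a beta distribution for each experiment, then count failures before r successes at that p.

```python
import numpy as np

rng = np.random.default_rng(3)
r, alpha, beta = 5, 4.0, 3.0  # illustrative parameters

# Compound sampling: p ~ Beta(alpha, beta), then X ~ NegBinomial(r, p)
# counts the failures before the r-th success.
p = rng.beta(alpha, beta, size=200_000)
failures = rng.negative_binomial(r, p)

# The beta negative binomial mean is r * beta / (alpha - 1), valid for alpha > 1.
print(failures.mean(), r * beta / (alpha - 1))
```

With r = 5, α = 4 and β = 3 both values should be close to 5.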

A Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. It was first described by J. Roger Hindley and later rediscovered by Robin Milner. Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.
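
A hypothetical sketch (not Milner's original formulation) of the unification step at the core of HM type inference, with strings for type variables and concrete types, and tuples for arrow types; the occurs check and nested-substitution chasing are omitted for brevity.

```python
def resolve(t, subst):
    """Chase a type variable through the substitution."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Return a substitution (dict) that makes t1 and t2 equal, extending subst."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():      # t1 is a type variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.islower():      # t2 is a type variable
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple):  # arrow types ('->', dom, cod)
        subst = unify(t1[1], t2[1], subst)
        return unify(t1[2], t2[2], subst)
    raise TypeError(f"cannot unify {t1} with {t2}")

# Unifying a -> b with Int -> Int instantiates both type variables.
s = unify(('->', 'a', 'b'), ('->', 'Int', 'Int'), {})
print(s)
```

Algorithm W threads such substitutions through the syntax tree of a term, generalizing the remaining type variables at let-bindings to obtain parametric polymorphism.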

In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. It is also the modern name for what used to be called the absolute differential calculus, developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century.

The Einstein–Hilbert action for general relativity was first formulated purely in terms of the space-time metric. Taking the metric and affine connection as independent variables in the action principle was first considered by Palatini. It is called a first-order formulation because the variables varied over involve only first derivatives in the action, so the Euler–Lagrange equations are not complicated by terms coming from higher derivatives. The tetradic Palatini action is another first-order formulation of the Einstein–Hilbert action, in terms of a different pair of independent variables known as frame fields and the spin connection. The use of frame fields and spin connections is essential in the formulation of a generally covariant fermionic action, which couples fermions to gravity when added to the tetradic Palatini action.
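
Schematically, and in one common convention (κ = 8πG/c⁴), the Palatini action treats the metric g and the connection Γ as independent:

```latex
S[g, \Gamma] = \frac{1}{2\kappa} \int d^4x \, \sqrt{-g}\; g^{\mu\nu} R_{\mu\nu}(\Gamma),
```

where R_{μν}(Γ) is built from the connection alone. Varying with respect to Γ forces it to be the Levi-Civita connection of g, so the usual Einstein field equations are recovered.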

Pentagramma mirificum

Pentagramma mirificum is a star polygon on a sphere, composed of five great circle arcs, all of whose internal angles are right angles. This shape was described by John Napier in his 1614 book Mirifici logarithmorum canonis descriptio, along with rules that link the values of trigonometric functions of five parts of a right spherical triangle. The properties of the pentagramma mirificum were studied by, among others, Carl Friedrich Gauss.
