Coordinative definition

A coordinative definition is a postulate that assigns a partial meaning to the theoretical terms of a scientific theory by correlating the mathematical objects of the pure, formal (syntactic) part of the theory with physical objects in the world. The idea was formulated by the logical positivists and arises out of a formalist vision of mathematics as pure symbol manipulation.

Formalism

To grasp the motivations that inspired the development of coordinative definitions, it is important to understand the doctrine of formalism as it is conceived in the philosophy of mathematics. For the formalists, mathematics, and particularly geometry, is divided into two parts: the pure and the applied. The first consists of an uninterpreted axiomatic system, or syntactic calculus, in which terms such as point, straight line and between (the so-called primitive terms) have their meanings assigned implicitly by the axioms in which they appear. On the basis of deductive rules specified in advance, pure geometry provides a set of theorems derived in a purely logical manner from the axioms. This part of mathematics is therefore a priori but devoid of any empirical meaning, and not synthetic in the sense of Kant.

It is only by connecting these primitive terms and theorems with physical objects such as rulers or rays of light that, according to the formalist, pure mathematics becomes applied mathematics and assumes an empirical meaning. The method of correlating the abstract mathematical objects of the pure part of theories with physical objects consists in coordinative definitions.

It was characteristic of logical positivism to consider a scientific theory to be nothing more than a set of sentences, subdivided into the class of theoretical sentences, the class of observational sentences, and the class of mixed sentences. The first class contains terms which refer to theoretical entities, that is, to entities not directly observable such as electrons, atoms and molecules; the second class contains terms which denote quantities or observable entities; and the third class consists of precisely the coordinative definitions, which contain both types of terms because they connect the theoretical terms with empirical procedures of measurement or with observable entities. For example, the interpretation of "the geodesic between two points" as corresponding to "the path of a light ray in a vacuum" provides a coordinative definition. This is very similar to, but distinct from, an operational definition. The difference is that coordinative definitions do not necessarily define theoretical terms in terms of laboratory procedures or experimentation, as operationalism does, but may also define them in terms of observable or empirical entities.
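
Schematically, such a mixed sentence pairs a theoretical predicate with an observational one. The following first-order rendering is an illustrative reconstruction, with predicate names chosen purely for exposition:

\[
\forall x\,\bigl(\mathrm{Geodesic}(x) \leftrightarrow \mathrm{LightRayPathInVacuum}(x)\bigr)
\]

Here $\mathrm{Geodesic}$ belongs to the theoretical vocabulary, $\mathrm{LightRayPathInVacuum}$ to the observational vocabulary, and the biconditional as a whole is the mixed sentence that confers empirical meaning on the theoretical term.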

In any case, such definitions (also called bridge laws or correspondence rules) were held to serve three important purposes. In the first place, by connecting the uninterpreted formalism with the observation language, they permit the assignment of synthetic content to theories. In the second place, according to whether they express a factual or a purely conventional content, they allow for the subdivision of science into two parts: one factual and independent of human conventions, the other non-empirical and conventional. This distinction is reminiscent of Kant's division of knowledge into content and form. Lastly, they make it possible to avoid certain vicious circles that arise with regard to such matters as the measurement of the speed of light in one direction, as illustrated below. As John Norton has pointed out with regard to Hans Reichenbach's arguments about the nature of geometry: on the one hand, we cannot know whether there are universal forces until we know the true geometry of spacetime, but on the other, we cannot know the true geometry of spacetime until we know whether there are universal forces. Such a circle can be broken by way of coordinative definition (Norton 1992).
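
The one-way speed-of-light circle can be made concrete with Reichenbach's simultaneity convention; the notation below is a standard textbook rendering, reconstructed here for illustration. A light signal leaves clock A at time $t_1$, is reflected at a distant clock B at time $t_2$, and returns to A at time $t_3$. Measuring the one-way speed of light presupposes a criterion of distant simultaneity, so $t_2$ cannot be measured independently and must instead be fixed by a coordinative definition:

\[
t_2 = t_1 + \varepsilon\,(t_3 - t_1), \qquad 0 < \varepsilon < 1
\]

Only the round-trip speed is empirically accessible; the choice $\varepsilon = \tfrac{1}{2}$ (Einstein's convention) makes the outgoing and returning one-way speeds equal by stipulation, and it is this stipulation that breaks the circle.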

From the point of view of the logical empiricist, in fact, the question of the "true geometry" of spacetime does not arise: saving, e.g., Euclidean geometry by introducing universal forces which cause rulers to contract in certain directions, or postulating that such forces are equal to zero, does not mean saving the Euclidean geometry of actual space, but only changing the definitions of the corresponding terms. For the empiricist, there are not really two incompatible theories to choose between in the case of the true geometry of spacetime (Euclidean geometry with universal forces not equal to zero, or non-Euclidean geometry with universal forces equal to zero), but only one theory formulated in two different ways, with different meanings attributed to the fundamental terms on the basis of coordinative definitions. However, given that, according to formalism, interpreted or applied geometry does have empirical content, the problem is not resolved on the basis of purely conventionalist considerations; it is precisely the coordinative definitions, which bear the burden of establishing the correspondences between mathematical and physical objects, that provide the basis for an empirical choice.
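
This trade-off is often summarized in a schematic equivalence attributed to Reichenbach; the rendering below is an illustrative reconstruction, not a formula taken from this article. A geometry $G$ combined with a universal-force field $F$ is empirically indistinguishable from an alternative geometry $G'$ in which the universal forces are set to zero:

\[
G + F \;\equiv\; G' + 0
\]

On this reading, Euclidean geometry with $F \neq 0$ and non-Euclidean geometry with vanishing universal forces describe exactly the same observations, and it is the coordinative definition of terms such as "rigid rod" that selects one formulation over the other.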

Objection

The problem is that coordinative definitions seem to beg the question. Since they are defined in conventional, non-empirical terms, it is difficult to see how they can resolve empirical questions. Using coordinative definitions seems merely to shift the problem of, for example, the geometric description of the world into a need to explain the mysterious "isomorphic coincidences" between the conventions given by the definitions and the structure of the physical world. Even in the simple case of defining "the geodesic between two points" by the empirical phrase "the path of a light ray in a vacuum", the correspondence between the mathematical and the empirical is left unexplained.

References

Norton, John D. (1992). "Philosophy of Space and Time". In Salmon, Merrilee H.; et al. (eds.). Introduction to the Philosophy of Science. Prentice Hall.
