Instrumentalism

In philosophy of science and in epistemology, instrumentalism is a methodological view that ideas are useful instruments, and that the worth of an idea is based on how effective it is in explaining and predicting natural phenomena. According to instrumentalists, a successful scientific theory reveals nothing that can be known to be true or false about nature's unobservable objects, properties or processes. [1] Scientific theory is merely a tool by which humans predict observations in a particular domain of nature by formulating laws, which state or summarize regularities; theories themselves do not reveal supposedly hidden aspects of nature that somehow explain these laws. [2] Instrumentalism is a perspective originally introduced by Pierre Duhem in 1906. [2]

Rejecting scientific realism's ambitions to uncover metaphysical truth about nature, [2] instrumentalism is usually categorized as a form of antirealism, although its mere lack of commitment to scientific theory's realism can also be termed nonrealism. Instrumentalism simply bypasses debate over whether, for example, a particle spoken about in particle physics is a discrete entity enjoying individual existence, an excitation mode of a region of a field, or something else altogether. [3] [4] [5] Instrumentalism holds that theoretical terms need only be useful for predicting the phenomena, the observed outcomes. [3]

There are multiple versions of instrumentalism.

History

British empiricism

Newton's theory of motion, whereby any object instantly interacts with all other objects across the universe, motivated the founder of British empiricism, John Locke, to speculate that matter is capable of thought. [6] The next leading British empiricist, George Berkeley, argued that an object's putative primary qualities as recognized by scientists, such as shape, extension, and impenetrability, are inconceivable without the putative secondary qualities of color, hardness, warmth, and so on. He also posed the question of how or why an object could properly be conceived to exist independently of any perception of it. [7] Berkeley did not object to everyday talk about the reality of objects, but instead took issue with philosophers, who spoke as if they knew something beyond sensory impressions that ordinary folk did not. [8]

For Berkeley, a scientific theory does not state causes or explanations, but simply identifies perceived types of objects and traces their typical regularities. [8] Berkeley thus anticipated the basis of what Auguste Comte in the 1830s called positivism, [8] although Comtean positivism added other principles concerning the scope, method, and uses of science that Berkeley would have disavowed. Berkeley also noted the usefulness of a scientific theory having terms that merely serve to aid calculations without having to refer to anything in particular, so long as they prove useful in practice. [8] Berkeley thereby anticipated the insight that logical positivists—who originated in the late 1920s, but who, by the 1950s, had softened into logical empiricists—would be compelled to accept: theoretical terms in science do not always translate into observational terms. [9]

The last great British empiricist, David Hume, posed a number of challenges to Francis Bacon's inductivism, which had been the prevailing, or at least the professed, view concerning the attainment of scientific knowledge. Regarding himself as having placed his own theory of knowledge on a par with Newton's theory of motion, Hume supposed that he had championed inductivism over scientific realism. Upon reading Hume's work, Immanuel Kant was "awakened from dogmatic slumber" and sought to neutralize any threat to science posed by Humean empiricism. Kant would go on to develop the first full-blown philosophy of physics. [10]

Transcendental idealism

To save Newton's law of universal gravitation, Immanuel Kant reasoned that the mind is the precondition of experience and thus the bridge from the noumena, which are how the world's things exist in themselves, to the phenomena, which are humans' recognized experiences. The mind itself, then, contains the structure that determines space, time, and substance: the mind's own categorization of noumena renders space Euclidean, time constant, and objects' motions exhibiting the very determinism predicted by Newtonian physics. Kant apparently presumed that the human mind, rather than being a phenomenon that had itself evolved, had been predetermined and set in place upon the formation of humankind. In any event, the mind was also the veil of appearance that scientific methods could never lift. And yet the mind could ponder itself and discover such truths, although not on a theoretical level but only by means of ethics. Kant's metaphysics, transcendental idealism, thus secured science from doubt, in that scientific knowledge was a case of "synthetic a priori" knowledge ("universal, necessary and informative"), and yet it discarded any hope of scientific realism.

Logical empiricism

Ernst Mach's early version of positivism, empirio-criticism, held that the mind has virtually no power to know anything beyond direct sensory experience, and so it verged on idealism. It was even alleged to be a surreptitious solipsism, the view that all that exists is one's own mind. Mach's positivism also strongly asserted the ultimate unity of the empirical sciences. It asserted phenomenalism as the new basis of scientific theory: all scientific terms were to refer to either actual or potential sensations, thus eliminating hypotheses while permitting such seemingly disparate scientific theories as the physical and the psychological to share terms and forms. Phenomenalism proved insuperably difficult to implement, yet it heavily influenced a new generation of philosophers of science, who emerged in the 1920s, termed themselves logical positivists, and pursued a program called verificationism. Logical positivists aimed not to instruct or restrict scientists, but to enlighten and structure philosophical discourse, rendering a scientific philosophy that would verify philosophical statements as well as scientific theories, align all human knowledge into a scientific worldview, and free humankind from the many problems rooted in confused or unclear language.

The verificationists expected a strict divide between theory and observation, mirrored by a divide between a theory's theoretical terms and its observational terms. Believing a theory's posited unobservables always to correspond to observations, the verificationists viewed a scientific theory's theoretical terms, such as electron, as metaphorical or elliptical references to observations, such as white streak in cloud chamber. They believed that scientific terms lacked meanings unto themselves, but acquired meanings from the logical structure that was the entire theory, which in turn matched patterns of experience. So by translating theoretical terms into observational terms and then decoding the theory's mathematical and logical structure, one could check whether the statement indeed matched patterns of experience, and thereby verify the scientific theory as false or true. Such verification would be possible, as never before in science, since translating theoretical terms into observational terms would make the scientific theory purely empirical, not metaphysical. Yet the logical positivists ran into insuperable difficulties. Moritz Schlick debated with Otto Neurath over foundationalism, the traditional view traced to Descartes as founder of modern Western philosophy, whereupon only nonfoundationalism was found tenable. Science, then, could not find a secure foundation of indubitable truth.

And since science aims to reveal not private but public truths, verificationists switched from phenomenalism to physicalism, whereby scientific theory refers to objects observable in space and, at least in principle, already recognizable by physicists. Finding strict empiricism untenable, verificationism underwent a "liberalization of empiricism". Rudolf Carnap even suggested that empiricism's basis was pragmatic. Recognizing that verification, proving a theory false or true, was unattainable, the verificationists discarded that demand and focused on confirmation theory. Carnap sought simply to quantify a universal law's degree of confirmation, its probable truth, but, despite his great mathematical and logical skill, found that his equations could never yield a degree of confirmation greater than zero (a schematic sketch of the difficulty follows this paragraph). Carl Hempel identified the paradox of confirmation. By the 1950s, the verificationists had established philosophy of science as a subdiscipline within academia's philosophy departments. By 1962, they had asked, and endeavored to answer, seemingly all the great questions about scientific theory, and their findings showed that the idealized scientific worldview was naively mistaken. Hempel, by then the leader of the legendary venture, raised the white flag that signaled verificationism's demise. What then suddenly struck Western society was Kuhn's landmark thesis, introduced by none other than Carnap, verificationism's greatest firebrand. The instrumentalism exhibited by scientists often does not even distinguish unobservable from observable entities. [3]
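The difficulty can be illustrated schematically with a deliberately simplified confirmation measure; the measure below is an expository assumption, not Carnap's own confirmation functions. Suppose equal prior weight is given to every way of assigning a property F, or its absence, to each of N individuals in a domain. The strictly universal law h, "all individuals are F", then corresponds to exactly one of the 2^N assignments, and even evidence e_k that k individuals have been observed to be F leaves its degree of confirmation vanishingly small for large domains:

\[
% Schematic toy measure only, assumed for illustration; not Carnap's actual confirmation functions.
c(h) = \frac{1}{2^{N}}, \qquad c(h \mid e_{k}) = \frac{1}{2^{\,N-k}}, \qquad \lim_{N \to \infty} c(h \mid e_{k}) = 0.
\]

On such a measure, a universal law over an unbounded domain receives degree of confirmation zero from any finite body of evidence, which is, in schematic form, the kind of result that frustrated Carnap's project.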

Historical turn

From the 1930s until Thomas Kuhn's 1962 The Structure of Scientific Revolutions, there were roughly two prevailing views about the nature of science. The popular view was scientific realism, which usually involved a belief that science was progressively unveiling a truer view, and building a better understanding, of nature. The professional approach was logical empiricism, wherein a scientific theory was held to be a logical structure whose terms all ultimately refer to some form of observation, while an objective process neutrally arbitrated theory choice, compelling scientists to agree on which scientific theory was superior. Physicists knew better, but, busy developing the Standard Model, were so steeped in quantum field theory that their talk, largely metaphorical, perhaps even metaphysical, was unintelligible to the public, while the steep mathematics warded off philosophers of physics. [4] By the 1980s, physicists regarded not particles but fields as the more fundamental, and no longer even hoped to discover what entities and processes might be truly fundamental to nature, perhaps not even the field. [4] [5] Kuhn had not claimed to have developed a novel thesis, but instead hoped to synthesize more usefully the recent developments in the philosophy and history of science.

Scientific realism

One scientific realist, Karl Popper, rejected all variants of positivism for their focus on sensations rather than on realism, and developed critical rationalism instead. Popper alleged that instrumentalism reduces basic science to merely applied science. [11] The British physicist David Deutsch, in his much later 1997 book The Fabric of Reality, followed Popper's critique of instrumentalism and argued that a scientific theory stripped of its explanatory content would be of strictly limited utility. [12]

Constructive empiricism as a form of instrumentalism

Bas van Fraassen's (1980) [13] project of constructive empiricism restricts belief to the domain of the observable, and for this reason it is described as a form of instrumentalism. [14]

In the philosophy of mind

In the philosophy of mind, instrumentalism is the view that propositional attitudes like beliefs are not actually concepts on which we can base scientific investigations of mind and brain, but that acting as if other beings have beliefs is a successful strategy.

Relation to pragmatism

Instrumentalism is closely related to pragmatism, the position that practical consequences are an essential basis for determining meaning, truth or value.

Notable proponents

See also

Notes

    • Chakravartty, Anjan, "Scientific Realism", §4 "Antirealism: Foils for scientific realism", §4.1 "Empiricism", in Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy, Summer 2013 edn; retrieved August 13, 2019 via plato.stanford.edu: "Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, 'electron' might mean 'white streak in a cloud chamber'), or with demonstrable laboratory procedures (a view called 'operationalism'). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions 'external' to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950)".
    • Samir Okasha, Philosophy of Science: A Very Short Introduction (New York: Oxford University Press, 2002), p. 62: "Strictly we should distinguish two sorts of anti-realism. According to the first sort, talk of unobservable entities is not to be understood literally at all. So when a scientist puts forward a theory about electrons, for example, we should not take him to be asserting the existence of entities called 'electrons'. Rather, his talk of electrons is metaphorical. This form of anti-realism was popular in the first half of the 20th century, but few people advocate it today. It was motivated largely by a doctrine in the philosophy of language, according to which it is not possible to make meaningful assertions about things that cannot in principle be observed, a doctrine that few contemporary philosophers accept. The second sort of anti-realism accepts that talk of unobservable entities should be taken at face value: if a theory says that electrons are negatively charged, it is true if electrons do exist and are negatively charged, but false otherwise. But we will never know which, says the anti-realist. So the correct attitude towards the claims that scientists make about unobservable reality is one of total agnosticism. They are either true or false, but we are incapable of finding out which. Most modern anti-realism is of this second sort".
  1. Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), pp. 242–43: "Like Whewell and Mach, Duhem was a practicing scientist who devoted an important part of his adult life to the history and philosophy of physics. ... His philosophy is contained in La théorie physique: son objet, sa structure [The Aim and Structure of Physical Theory] (1906), which may well be, to this day, the best overall book on the subject. Its main theses, although quite novel when first put forward, have in the meantime become commonplace, so I shall review them summarily without detailed argument, just to associate them with his name. But first I ought to say that neither in the first nor in the second (1914) edition of his book did Duhem take into account—or even so much as mention—the deep changes that were then taking place in physics. Still, the subsequent success and current entrenchment of Duhem's ideas are due above all to their remarkable agreement with—and the light they throw on—the practice of mathematical physics in the twentieth century. In the first part of La théorie physique, Duhem contrasts two opinions concerning the aim of physical theory. For some authors, it ought to furnish 'the explanation of a set of experimentally established laws', while for others it is 'an abstract system whose aim is to summarize and logically classify a set of experimental laws, without pretending to explain these laws' (Duhem 1914, p. 3). Duhem resolutely sides with the latter. His rejection of the former rests on his understanding of 'explanation' ('explication' in French), which he expresses as follows: 'To explain, explicare, is to divest reality from the appearances which enfold it like veils, in order to see the reality face to face' (pp 3–4). Authors in the first group expect from physics the true vision of things-in-themselves that religious myth and philosophical speculation have hitherto been unable to supply. Their explanation makes no sense unless (i) there is, 'beneath the sense appearances revealed to us by our perceptions, [...] a reality different from these appearances' and (ii) we know 'the nature of the elements which constitute' that reality (p 7). Thus, physical theory cannot explain—in the stated sense—the laws established by experiment unless it depends on metaphysics and thus remains subject to the interminable disputes of metaphysicians. Worse still, the teachings of no metaphysical school are sufficiently detailed and precise to account for all of the elements of physical theory (p 18). Duhem instead assigns to physical theories a more modest but autonomous and readily attainable aim: 'A physical theory is not an explanation. It is a system of mathematical propositions, derived from a small number of principles, whose purpose is to represent a set of experimental laws as simply, as completely, and as exactly as possible (Duhem 1914, p. 24)".
  2. P Kyle Stanford, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives (New York: Oxford University Press, 2006), p. 198.
  3. Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), pp. 396–97, including quote: "First, quantum field theories have been the working theories at the frontline of physics for over 30 years. Second, these theories appear to do away with the familiar conception of physical systems as aggregates of substantive individual particles. This conception was already undermined by Bose–Einstein and Fermi–Dirac statistics (§6.1.4), according to which the so-called particles cannot be assigned a definite trajectory in ordinary space. But quantum field theories go a long step further and—or so it would seem—conceive 'particles' as excitation modes of the field. This, I presume, motivated Howard Stein's saying that 'the quantum theory of fields is the contemporary locus of metaphysical research' (1970, p. 285). Finally, the very fact that physicists conspicuously and fruitfully resort to unperspicacious theories can teach us something about the aim and reach of science. Here is how physicists work, dirty-handed, in their everyday practice, a far cry from what is taught at the Sunday school of the 'scientific worldview' ".
  4. Meinard Kuhlmann, "Physicists debate whether the world is made of particles or fields—or something else entirely", Scientific American, 2013 Aug;309(2).
  5. Torretti 1999, p. 75.
  6. Torretti 1999, pp. 101–02.
  7. Torretti 1999, p. 102.
  8. Torretti 1999, p. 103.
  9. Torretti 1999, p. 98: "I shall dwell at some length on Kant's conception of the sources and scope of Newton's conceptual frame, for it was the first full-blown philosophy of physics and remains to this day the most significant".
  10. Karl R Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (London: Routledge, 2003 [1963]), ISBN 0-415-28594-1, quote: "Instrumentalism can be formulated as the thesis that scientific theories—the theories of the so-called 'pure' sciences—are nothing but computational rules (or inference rules); of the same character, fundamentally, as the computation rules of the so-called 'applied' sciences. (One might even formulate it as the thesis that "pure" science is a misnomer, and that all science is 'applied'.) Now my reply to instrumentalism consists in showing that there are profound differences between "pure" theories and technological computation rules, and that instrumentalism can give a perfect description of these rules but is quite unable to account for the difference between them and the theories".
  11. Deutsch, David (1997). The Fabric of Reality: The Science of Parallel Universes and Its Implications (First American ed.). New York. ISBN 0-7139-9061-9. OCLC 36393434.
  12. van Fraassen, Bas C., 1980, The Scientific Image, Oxford: Oxford University Press.
  13. Chakravartty, Anjan (August 13, 2017). Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved August 13, 2019 via Stanford Encyclopedia of Philosophy.
  14. Gouinlock, James, "What is the Legacy of Instrumentalism? Rorty's Interpretation of Dewey." In Herman J. Saatkamp, ed., Rorty and Pragmatism. Nashville, TN: Vanderbilt University Press, 1995.

Sources
