Theory choice

Theory choice was a central problem in the philosophy of science in the early 20th century; under the impact of the new and controversial theories of relativity and quantum physics, it came to concern how scientists should choose between competing theories.

The classical answer was to select the theory which was best verified. Against this, Karl Popper argued that competing theories should be subjected to comparative tests, and the one which survived the tests chosen. If two theories could not, for practical reasons, be tested against each other, one should prefer the one with the higher degree of empirical content, said Popper in The Logic of Scientific Discovery.

The mathematician and physicist Henri Poincaré, like many others, instead proposed simplicity as a criterion: [1] one should choose the mathematically simplest or most elegant approach. Many have sympathized with this view, but the problem is that the notion of simplicity is highly intuitive, even personal, and no one has managed to formulate it in precise and generally acceptable terms.

Popper's solution was subsequently criticized by Thomas S. Kuhn in The Structure of Scientific Revolutions. He denied that competing theories (or paradigms) could be compared in the way Popper had claimed, and substituted instead what can briefly be described as pragmatic success. This led to an intense discussion, with Imre Lakatos and Paul Feyerabend among the best-known participants.

The discussion has continued, but no general and uncontroversial criteria for deciding objectively which is the best theory have so far been formulated. The main criteria usually proposed are to choose the theory which provides the best (and novel) predictions, the one with the greatest explanatory power, the one which opens up the most fruitful new problems, or the most elegant and simple one. Alternatively, a theory may be preferable if it is better integrated into the rest of contemporary knowledge.

  1. Keuzenkamp, Hugo A.; McAleer, Michael (January 1995). "Simplicity, Scientific Inference and Econometric Modelling". The Economic Journal. 105 (428): 1–21. doi:10.2307/2235317. JSTOR 2235317.


Related Research Articles

Falsifiability: property of a theory, hypothesis, or statement that can be logically contradicted by an empirical test

Falsifiability is a standard of evaluation of scientific theories and hypotheses that was introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934). He proposed it as the cornerstone of a solution to both the problem of induction and the problem of demarcation. A theory or hypothesis is falsifiable if it can be logically contradicted by an empirical test that can potentially be executed with existing technologies. Popper insisted that, as a logical criterion, it is distinct from the related concept "capacity to be proven wrong" discussed in Lakatos's falsificationism. Even as a purely logical criterion, its purpose is to make the theory predictive and testable, and thus useful in practice.

Karl Popper: Austrian-British philosopher of science (1902–1994)

Sir Karl Raimund Popper was an Austrian-British philosopher, academic and social commentator. One of the 20th century's most influential philosophers of science, Popper is known for his rejection of the classical inductivist views on the scientific method in favour of empirical falsification. According to Popper, a theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can be scrutinised with decisive experiments. Popper was opposed to the classical justificationist account of knowledge, which he replaced with critical rationalism, namely "the first non-justificational philosophy of criticism in the history of philosophy".

Pseudoscience: unscientific claims wrongly presented as scientific

Pseudoscience consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method. Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited.

Occam's razor, Ockham's razor, or Ocham's razor, also known as the principle of parsimony or the law of parsimony, is the problem-solving principle that "entities should not be multiplied beyond necessity". It is generally understood in the sense that, among competing theories or explanations, the simpler one, for example a model with fewer parameters, is to be preferred. The idea is frequently attributed to the English Franciscan friar William of Ockham, a scholastic philosopher and theologian, although he never used these words. This philosophical razor advocates that, when presented with competing hypotheses about the same prediction, one should select the solution with the fewest assumptions; it is not meant to be a way of choosing between hypotheses that make different predictions.
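The preference for "a model with fewer parameters" can be sketched in code. The example below is a toy illustration only (the data, the candidate models, and the penalty weight are all assumed here, not taken from the article): when two candidate models fit the data equally well, a score that adds a penalty per free parameter selects the simpler one.

```python
def sum_sq_error(predict, points):
    """Total squared prediction error of a model over (x, y) pairs."""
    return sum((y - predict(x)) ** 2 for x, y in points)

def parsimony_choice(candidates, points, penalty=1.0):
    """Pick the candidate minimizing fit error plus a per-parameter penalty.

    `penalty` is an assumed trade-off weight, not a principled constant:
    parsimony only breaks ties when fits are comparable.
    """
    return min(
        candidates,
        key=lambda name: sum_sq_error(candidates[name][1], points)
        + penalty * candidates[name][0],
    )

# Hypothetical data generated by y = 2x + 1.
points = [(x, 2 * x + 1) for x in range(6)]

candidates = {
    # name: (number of free parameters, prediction function)
    "line, 2 params": (2, lambda x: 2 * x + 1),
    # Fits the data exactly as well, but carries two extra parameters.
    "cubic, 4 params": (4, lambda x: 2 * x + 1 + 0 * x ** 3),
}

chosen = parsimony_choice(candidates, points)
```

Since both models fit perfectly, the parameter penalty alone decides, and the two-parameter line is chosen.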

Philosophy of science: study of the assumptions, bases, and implications of science

Philosophy of science is a branch of philosophy concerned with the foundations, methods, and implications of science. The central questions of this study concern what qualifies as science, the reliability of scientific theories, and the ultimate purpose of science. This discipline overlaps with metaphysics, ontology, and epistemology, for example, when it explores the relationship between science and truth. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of science. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science.

Simulated annealing: probabilistic optimization technique and metaheuristic

Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. It is often used when the search space is discrete. For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent or branch and bound.
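A minimal sketch of the technique in Python (the objective function, neighbor move, and cooling schedule below are illustrative choices, not prescribed by the article): moves that worsen the objective are accepted with a probability that shrinks as the temperature cools, which lets the search escape local optima early on and settle later.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.999,
                        steps=4000, seed=0):
    """Minimize f from x0 with a simple geometric cooling schedule."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # Always accept improvements; accept worse moves with probability
        # exp(-(fy - fx) / t), which shrinks as the temperature cools.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy objective with many local minima (Rastrigin-like); global minimum at 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))

best, fbest = simulated_annealing(
    f, x0=5.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
)
```

Gradient descent started at the same point would stop in the nearest local basin; the temperature-controlled acceptance of uphill moves is what gives SA a chance at the global optimum.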

Problem of induction: the question of whether inductive reasoning leads to definitive knowledge

The problem of induction is the philosophical question of what are the justifications, if any, for any growth of knowledge understood in the classic philosophical sense—knowledge that goes beyond a mere collection of observations—highlighting the apparent lack of justification in particular for:

  1. Generalizing about the properties of a class of objects based on some number of observations of particular instances of that class or
  2. Presupposing that a sequence of events in the future will occur as it always has in the past. Hume called this the principle of uniformity of nature.

A scientific theory is an explanation of an aspect of the natural world and universe that has been repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with scientific methods. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

Critical rationalism is an epistemological philosophy advanced by Karl Popper on the basis that, if a statement cannot be logically deduced, it might nevertheless be possible to logically falsify it. Following Hume, Popper rejected any inductive logic that is ampliative, i.e., any logic that can provide more knowledge than deductive logic. So, the idea is that, if we cannot get it logically, we should at the least try to logically falsify it, which led Popper to his falsifiability criterion. Popper wrote about critical rationalism in many works, including: The Logic of Scientific Discovery (1934/1959), The Open Society and its Enemies (1945), Conjectures and Refutations (1963), Unended Quest (1976), and The Myth of the Framework (1994).

In philosophy of science and epistemology, the demarcation problem is the question of how to distinguish between science and non-science. It examines the boundaries between science, pseudoscience, and other products of human activity, like art and literature, and beliefs. The debate continues after over two millennia of dialogue among philosophers of science and scientists in various fields. The debate has consequences for what can be called "scientific" in fields such as education and public policy.

Commensurability is a concept in the philosophy of science whereby scientific theories are said to be "commensurable" if scientists can discuss the theories using a shared nomenclature that allows direct comparison of them to determine which one is more valid or useful. On the other hand, theories are incommensurable if they are embedded in starkly contrasting conceptual frameworks whose languages do not overlap sufficiently to permit scientists to directly compare the theories or to cite empirical evidence favoring one theory over the other. Discussed by Ludwik Fleck in the 1930s, and popularized by Thomas Kuhn in the 1960s, the problem of incommensurability results in scientists talking past each other, as it were, while comparison of theories is muddled by confusions about terms, contexts and consequences.

In philosophy, verisimilitude is the notion that some propositions are closer to being true than other propositions. The problem of verisimilitude is the problem of articulating what it takes for one false theory to be closer to the truth than another false theory.

Models of scientific inquiry have two functions: first, to provide a descriptive account of how scientific inquiry is carried out in practice, and second, to provide an explanatory account of why scientific inquiry succeeds as well as it appears to do in arriving at genuine knowledge.

The search for scientific knowledge extends far back into antiquity. At some point in the past, at least by the time of Aristotle, philosophers recognized that a fundamental distinction should be drawn between two kinds of scientific knowledge—roughly, knowledge that and knowledge why. It is one thing to know that each planet periodically reverses the direction of its motion with respect to the background of fixed stars; it is quite a different matter to know why. Knowledge of the former type is descriptive; knowledge of the latter type is explanatory. It is explanatory knowledge that provides scientific understanding of the world.

Antireductionism is the position in science and metaphysics that stands in contrast to reductionism (anti-holism) by advocating that not all properties of a system can be explained in terms of its constituent parts and their interactions.

Inductivism is the traditional and still commonplace philosophy of scientific method to develop scientific theories. Inductivism aims to neutrally observe a domain, infer laws from examined cases—hence, inductive reasoning—and thus objectively discover the sole naturally true theory of the observed.

Cooperative bargaining is a process in which two people decide how to share a surplus that they can jointly generate. In many cases, the surplus created by the two players can be shared in many ways, forcing the players to negotiate which division of payoffs to choose. Such surplus-sharing problems are faced by management and labor in the division of a firm's profit, by trade partners in the specification of the terms of trade, and more.
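The article does not name a particular solution concept, but one standard answer to such surplus-sharing problems is the Nash bargaining solution, which picks the division maximizing the product of the players' gains over their disagreement payoffs. A sketch with made-up numbers (the surplus and disagreement payoffs are hypothetical, and a coarse grid search stands in for calculus):

```python
def nash_bargaining_split(surplus, d1, d2, grid=10_000):
    """Divide `surplus` between two players with disagreement payoffs d1, d2
    by maximizing the Nash product (x - d1) * (surplus - x - d2)."""
    best_x, best_p = 0.0, float("-inf")
    for i in range(grid + 1):
        x = surplus * i / grid                 # player 1's candidate share
        p = (x - d1) * (surplus - x - d2)      # product of gains over disagreement
        if p > best_p:
            best_x, best_p = x, p
    return best_x, surplus - best_x

# Hypothetical numbers: a surplus of 10, disagreement payoffs 2 and 3.
s1, s2 = nash_bargaining_split(10.0, 2.0, 3.0)
```

Here each player receives their disagreement payoff plus half the net surplus (10 - 2 - 3 = 5), so the split is 4.5 and 5.5, which matches the closed-form solution x = d1 + (surplus - d1 - d2) / 2.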

Hypothesis: proposed explanation for an observation, phenomenon, or scientific problem

A hypothesis is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words "hypothesis" and "theory" are often used interchangeably, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research in a process beginning with an educated guess or thought.

Explanatory power is the ability of a hypothesis or theory to explain effectively the subject matter to which it pertains. Its opposite is explanatory impotence.

Bold hypothesis or bold conjecture is a concept in the philosophy of science of Karl Popper, first explained in his debut The Logic of Scientific Discovery (1935) and subsequently elaborated in writings such as Conjectures and Refutations: The Growth of Scientific Knowledge (1963). The concept is nowadays widely used in the philosophy of science and in the philosophy of knowledge. It is also used in the social and behavioural sciences.