Counterinduction

In logic, counterinduction is the practice of developing a paradigm that contradicts the current one and, by comparison, helps to call it into question. Paul Feyerabend argued for counterinduction as a way to test scientific theories that otherwise go unchallenged, unchallenged simply because there are no structures within the scientific paradigm for challenging itself (see Crotty, 1998, p. 39). For instance, Feyerabend is quoted as saying the following:

"Therefore, the first step in our criticism of customary concepts and customary reactions is to step outside the circle and either to invent a new conceptual system, for example, a new theory, that clashes with the most carefully established observational results and confounds the most plausible theoretical principles, or to import such a system from the outside science, from religion, from mythology, from the ideas of incompetents, or the ramblings of madmen." (Feyerabend, 1993, pp. 52-3)

This points to the pluralistic methodology that Feyerabend espouses, which in turn supports counterinductive methods. Feyerabend's epistemological anarchism popularized the notion of counterinduction.

When counterinduction is mentioned, it is usually not presented as a valid rule. Instead, it is offered as a refutation of Max Black's proposed inductive justification of induction, since the counterinductive justification of counterinduction is formally identical to the inductive justification of induction.[1] For further information, see Problem of induction.
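
The parallel can be made explicit. The following schematic is only a sketch; the phrasing of the premises is an assumed paraphrase for illustration, not a quotation from Black or his critics:

```latex
% Sketch of the two formally identical arguments (requires amsmath, amssymb).
\begin{align*}
&\textbf{Inductive justification of induction:}\\
&\quad \text{Most past applications of induction have been successful.}\\
&\quad \therefore \text{ (by induction) induction will be successful in the future.}\\[1ex]
&\textbf{Counterinductive justification of counterinduction:}\\
&\quad \text{Most past applications of counterinduction have been unsuccessful.}\\
&\quad \therefore \text{ (by counterinduction) counterinduction will be successful in the future.}
\end{align*}
```

Each argument appeals to the very rule it is meant to justify, which is why accepting the first while rejecting the second requires an independent reason.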

Related Research Articles

Empiricism: idea that knowledge comes only or mainly from sensory experience

In philosophy, empiricism is an epistemological view that holds that true knowledge or justification comes only or primarily from sensory experience. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. However, empiricists may argue that traditions arise due to relations of previous sensory experiences.

Falsifiability: property of a statement that can be logically contradicted

Falsifiability is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934). A theory or hypothesis is falsifiable if it can be logically contradicted by an empirical test.

Scientific method: interplay between observation, experiment and theory in science

The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. It involves careful observation and rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; the testability of hypotheses; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.

Statistical inference: process of using data analysis to infer properties of an underlying distribution

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.
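
As a minimal sketch only (the sample values and the 95% level are assumptions chosen for illustration, not taken from the article), estimating a population mean from a sample might look like this:

```python
# Minimal sketch: estimate the mean of an underlying distribution from a sample
# and attach a 95% confidence interval. The sample values below are invented
# for illustration; in practice they would be drawn from the population studied.
import math
import statistics

sample = [4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7, 5.2, 5.5]

n = len(sample)
mean = statistics.mean(sample)                  # point estimate of the population mean
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of that estimate

# 1.96 is the large-sample normal critical value; a t critical value would be
# slightly wider for a sample this small.
margin = 1.96 * sem
print(f"estimated mean: {mean:.2f}, 95% CI: ({mean - margin:.2f}, {mean + margin:.2f})")
```

The point estimate and interval are claims about the population, inferred from the sample under the assumption that the sample was drawn from it.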

Philosophy of science: study of the foundations, methods, and implications of science

Philosophy of science is a branch of philosophy concerned with the foundations, methods, and implications of science. The central questions of this study concern what qualifies as science, the reliability of scientific theories, and the ultimate purpose of science. This discipline overlaps with metaphysics, ontology, and epistemology, for example, when it explores the relationship between science and truth. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of science. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science.

Problem of induction: question of whether inductive reasoning leads to definitive knowledge

First formulated by David Hume, the problem of induction questions our reasons for believing that the future will resemble the past, or more broadly it questions predictions about unobserved things based on previous observations. Such inferences from the observed to the unobserved are known as "inductive inferences", and Hume, while acknowledging that everyone does and must make them, argued that there is no non-circular way to justify them, thereby undermining one of the Enlightenment pillars of rationality.

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle. Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion. A third type of inference, abduction, is sometimes distinguished from induction, notably by Charles Sanders Peirce.

The following outline is provided as an overview of and topical guide to the scientific method.

Inductive reasoning is a method of reasoning in which a general principle is derived from a body of observations. It consists of making broad generalizations based on specific observations. Inductive reasoning is distinct from deductive reasoning, where the conclusion of a deductive argument is certain given the premises are correct; in contrast, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.

Ray Solomonoff was the inventor of algorithmic probability and of his General Theory of Inductive Inference, and a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

Solomonoff's theory of inductive inference is a mathematical theory of induction introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
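
In symbols, and as a sketch only (the notation ℓ(T) for the length of a shortest program computing T is one common convention, not the article's), the posterior combines Bayes' rule with a length-based universal prior:

```latex
% Sketch: Bayes' rule over computable theories T given observed data D,
% with a universal prior that assigns every computable theory positive weight
% and favors shorter descriptions.
\[
  P(T \mid D) \;=\; \frac{P(D \mid T)\, P(T)}{\sum_{T'} P(D \mid T')\, P(T')},
  \qquad
  P(T) \;\propto\; 2^{-\ell(T)},
\]
% where \ell(T) is the length in bits of a shortest program computing T
% on a fixed universal machine.
```

Because the prior is positive for every computable theory, no such theory is ruled out in advance; shorter (simpler) theories simply start with more weight.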

In statistics, classification is the problem of identifying which of a set of categories (sub-populations) an observation belongs to. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient.
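
As a hedged sketch of the spam example (the keywords, training messages, and the nearest-centroid rule are all illustrative assumptions, not a reference method):

```python
# Toy classifier: assign a message to "spam" or "non-spam" by comparing simple
# keyword counts against per-class average feature vectors (nearest centroid).
# All messages and keywords below are invented for illustration.

KEYWORDS = ["free", "winner", "meeting", "report"]

def features(message: str) -> list[float]:
    words = message.lower().split()
    return [float(words.count(k)) for k in KEYWORDS]

def centroid(vectors: list[list[float]]) -> list[float]:
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(message: str, centroids: dict[str, list[float]]) -> str:
    x = features(message)
    def dist(c):  # squared Euclidean distance to a class centroid
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

train = {
    "spam":     ["free winner free prize", "you are a winner claim free cash"],
    "non-spam": ["weekly report attached", "meeting moved to friday, see report"],
}
centroids = {label: centroid([features(m) for m in msgs]) for label, msgs in train.items()}

print(classify("free cash winner", centroids))                              # expected: spam
print(classify("please review the report before the meeting", centroids))   # expected: non-spam
```

Real classifiers replace the hand-picked keywords with learned features and the centroid rule with a statistically motivated model, but the shape of the problem, mapping an observation to one of a fixed set of categories, is the same.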

Computational epistemology is a subdiscipline of formal epistemology that studies the intrinsic complexity of inductive problems for ideal and computationally bounded agents. In short, computational epistemology is to induction what recursion theory is to deduction. It has been applied to problems in philosophy of science.

Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.

Münchhausen trilemma: a thought experiment used to demonstrate the impossibility of proving any truth

In epistemology, the Münchhausen trilemma is a thought experiment intended to demonstrate the theoretical impossibility of proving any truth, even in the fields of logic and mathematics, without appealing to accepted assumptions. If it is asked how any given proposition is known to be true, proof in support of that proposition may be provided. Yet that same question can be asked of that supporting proof, and of any subsequent supporting proof. The Münchhausen trilemma is that there are only three ways of completing a proof: the circular argument, in which the proof of some proposition presupposes the truth of that very proposition; the regressive argument, in which each proof requires a further proof, ad infinitum; and the dogmatic argument, which rests on accepted precepts that are merely asserted rather than defended.

Inductivism is the traditional, and still commonplace, philosophy of scientific method for developing scientific theories. Inductivism aims to neutrally observe a domain, infer laws from examined cases (hence, inductive reasoning), and thus objectively discover the sole naturally true theory of what is observed.

Outline of thought: overview of and topical guide to thought

The following outline is provided as an overview of and topical guide to thought (thinking).

Statistics concerns the collection, organization, analysis, interpretation, and presentation of data, and is used to solve practical problems and draw conclusions.

Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative and often recursive programs from incomplete specifications, such as input/output examples or constraints.
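
A minimal sketch of the idea, with the primitive set and the input/output examples invented for illustration: enumerate compositions of a few primitives and return one that reproduces every given example.

```python
# Toy inductive programming: search compositions of a few integer primitives
# for a program consistent with all given input/output examples.
# Primitives and examples are invented for illustration.
from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return a list of primitive names whose composition fits every example."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None

# Incomplete specification: only two input/output pairs.
examples = [(2, 9), (3, 16)]      # consistent with: increment, then square
print(synthesize(examples))       # e.g. ['inc', 'square']
```

Because the specification is incomplete, several programs could fit the examples; the search simply returns the first consistent one it finds, which is the sense in which the program is induced rather than deduced.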

References

  1. The Problem of Induction Archived March 13, 2007, at the Wayback Machine