In science, experimenter's regress refers to a loop of dependence between theory and evidence: to judge whether a new piece of evidence is correct we rely on theory-based predictions, and to judge between competing theories we rely on existing evidence. Cognitive biases can affect how experiments are carried out and interpreted, while experimental results in turn determine which theory is accepted as valid. The issue is particularly important in new fields of science, where there is no consensus on the relative merits of competing theories and where the extent of experimental error is not well known.
If experimenter's regress acts as a positive feedback loop, it can become a source of pathological science: an experimenter's strong belief in a new theory produces confirmation bias, and any biased evidence they obtain then further strengthens their belief in that theory. Neither individual researchers nor entire scientific communities are immune to this effect: see N-rays and polywater.
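This feedback loop can be illustrated with a small toy simulation (a sketch for illustration only, not drawn from the article or from Collins's work). It assumes a hypothetical experimenter whose evidence is actually pure noise, but who discards "wrong-looking" results with a probability equal to their current belief; the function names, likelihoods, and other parameters below are invented for the example.

```python
import random

def run_experimenter(n_trials=200, prior=0.5, biased=True, seed=1):
    """Return the experimenter's final belief after n_trials experiments."""
    random.seed(seed)
    belief = prior  # credence that the theory is correct
    for _ in range(n_trials):
        supports_theory = random.random() < 0.5  # evidence is pure noise

        if biased and not supports_theory and random.random() < belief:
            # Confirmation bias: a disconfirming result is blamed on a
            # "faulty protocol" with probability equal to the current
            # belief, and is discarded without updating the belief.
            continue

        # Accepted evidence updates the belief with a crude Bayesian step,
        # using an assumed likelihood of 0.7 for supporting results.
        likelihood = 0.7 if supports_theory else 0.3
        belief = belief * likelihood / (
            belief * likelihood + (1 - belief) * (1 - likelihood)
        )
    return belief

def mean_final_belief(biased, n_runs=100):
    """Average the final belief over many independent runs."""
    return sum(run_experimenter(biased=biased, seed=s)
               for s in range(n_runs)) / n_runs

if __name__ == "__main__":
    # With the bias, discarded disconfirmations push the belief toward 1;
    # without it, upward and downward drift are equally likely, so the
    # average stays near the 0.5 prior.
    print("mean final belief, biased  :", round(mean_final_belief(True), 3))
    print("mean final belief, unbiased:", round(mean_final_belief(False), 3))
```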
Experimenter's regress is a typical relativistic phenomenon in the Empirical Programme of Relativism (EPOR). EPOR focuses on social interactions, examining particular (local) cases and controversies in the context in which they occur. In EPOR, all scientific knowledge is regarded as socially constructed and thus "not given by nature".
In his article Son of Seven Sexes: The Social Destruction of a Physical Phenomenon, Harry Collins argued that scientific experiments are subject to what he calls "experimenter's regress". [1] The outcome of an experiment on a phenomenon studied for the first time is always uncertain, and judging what matters in such situations requires considerable experience and tacit, practical knowledge. When a scientist runs an experiment and it yields a result, they can never be sure whether the result is correct: either the result looks right, so they conclude the experimental protocol was sound, or the result looks wrong, so they conclude something must be wrong with the protocol. The scientist, in other words, has to get the right answer in order to know that the experiment is working, and has to know that the experiment is working in order to get the right answer.
Experimenter's regress occurs at the "research frontier", where the outcome of research is uncertain because the scientist is dealing with novel phenomena. Collins puts it this way: "usually, successful practice of an experimental skill is evident in a successful outcome to an experiment, but where the detection of a novel phenomenon is in question, it is not clear what should count as a 'successful outcome' – detection or non detection of the phenomenon" (Collins 1981: 34). In new fields of research, where no paradigm has yet evolved and no consensus exists as to what counts as proper research, experimenter's regress is a frequent problem. Likewise, where a discovery or claim is controversial because of opposing interests, dissenters will often question the experimental evidence on which it rests. [2]
Because, for Collins, all scientific knowledge is socially constructed, there are no purely cognitive reasons or objective criteria that determine whether a claim is valid. The regress must be broken by "social negotiation" among scientists in the field. In the case of gravitational radiation, Collins notes that Weber, the scientist who was said to have discovered the phenomenon, could counter every criticism and had "a technical answer for every other point", yet he failed to convince other scientists and in the end was no longer taken seriously. [2]
The problems that come with experimenter's regress can never be fully avoided, because in EPOR scientific outcomes are seen as negotiable and socially constructed: acceptance of a claim ultimately comes down to persuading others in the community. Experimenter's regress can always become a problem in a world where "the natural world in no way constrains what is believed to be". Moreover, it is difficult to falsify a claim by replicating an experiment: aside from the practical constraints of time, money and access to facilities, an experimental outcome may depend on precise conditions or on tacit knowledge (unarticulated knowledge) that was not included in the published methods, and tacit knowledge can never be fully articulated or translated into a set of rules.
Some commentators have argued that Collins's "experimenter's regress" is foreshadowed by Sextus Empiricus' argument that "if we shall judge the intellects by the senses, and the senses by the intellect, this involves circular reasoning inasmuch as it is required that the intellects should be judged first in order that the intellects may be tested [hence] we possess no means by which to judge objects" (quoted after Godin & Gingras 2002: 140). Others have extended Collins's argument to the cases of theoretical practice ("theoretician's regress"; Kennefick 2000) and computer simulation studies ("simulationist's regress"; Gelfert 2011; Tolk 2017).
See also: Empirical research; Pseudoscience; Scientific method; Confirmation bias; Experiment; Selection bias; Hindsight bias; Sociology of scientific knowledge; Harry Collins; Enculturation; Computational science; Internal validity; Scientific modelling; Feminist epistemology; Evidence; Criticism of science; Decline effect; Causal inference.