An operational definition specifies concrete, replicable procedures designed to represent a construct. In the words of American psychologist S.S. Stevens (1935), "An operation is the performance which we execute in order to make known a concept." [1] [2] For example, an operational definition of "fear" (the construct) often includes measurable physiologic responses that occur in response to a perceived threat. Thus, "fear" might be operationally defined as specified changes in heart rate, galvanic skin response, pupil dilation, and blood pressure. [3]
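The value of such a definition is that it yields a public, repeatable decision procedure. As a minimal sketch (not a validated instrument), the construct might be scored as a test over measured physiological changes; every threshold below is an illustrative assumption, not a cut-off from the literature:

```python
# A minimal sketch: "fear" operationalized as threshold changes in
# physiological measures. All threshold values are illustrative only.

def fear_response(delta_heart_rate_bpm, delta_skin_conductance_uS,
                  pupil_dilation_mm, delta_systolic_bp_mmHg):
    """Return True if the measured changes meet the operational criteria."""
    criteria = [
        delta_heart_rate_bpm >= 10,       # illustrative threshold
        delta_skin_conductance_uS >= 0.5, # illustrative threshold
        pupil_dilation_mm >= 0.3,         # illustrative threshold
        delta_systolic_bp_mmHg >= 8,      # illustrative threshold
    ]
    # Require a majority of indicators, since any single measure is noisy.
    return sum(criteria) >= 3

print(fear_response(15, 0.8, 0.4, 5))  # True: three of four criteria met
```

Whatever the particular thresholds chosen, the point is that any investigator applying the same procedure to the same measurements reaches the same verdict.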
An operational definition is designed to model or represent a concept or theoretical definition, also known as a construct. Scientists should describe the operations (procedures, actions, or processes) that define the concept with enough specificity such that other investigators can replicate their research. [4] [5]
Operational definitions are also used to define system states in terms of a specific, publicly accessible process of preparation or validation testing. [6] For example, 100 degrees Celsius may be operationally defined as the process of heating water at sea level until it is observed to boil.
A cake can be operationally defined by a cake recipe. [7]
Despite the controversial philosophical origins of the concept, particularly its close association with logical positivism, operational definitions have undisputed practical applications. This is especially so in the social and medical sciences, where operational definitions of key terms are used to preserve the unambiguous empirical testability of hypotheses and theories. Operational definitions are also important in the physical sciences.
The Stanford Encyclopedia of Philosophy entry on scientific realism, written by Richard Boyd, indicates that the modern concept owes its origin in part to Percy Williams Bridgman, who felt that the expression of scientific concepts was often abstract and unclear. Inspired by Ernst Mach, in 1914 Bridgman attempted to redefine unobservable entities concretely in terms of the physical and mental operations used to measure them. [8] Accordingly, the definition of each unobservable entity was uniquely identified with the instrumentation used to define it. From the beginning, objections were raised to this approach, largely concerning its inflexibility. As Boyd notes, "In actual, and apparently reliable, scientific practice, changes in the instrumentation associated with theoretical terms are routine, and apparently crucial to the progress of science. According to a 'pure' operationalist conception, these sorts of modifications would not be methodologically acceptable, since each definition must be considered to identify a unique 'object' (or class of objects)." [8] However, this rejection of operationalism as a general project, destined ultimately to define all experiential phenomena uniquely, did not mean that operational definitions ceased to have any practical use or that they could not be applied in particular cases. [citation needed]
The special theory of relativity can be viewed as the introduction of operational definitions for simultaneity of events and of distance, that is, as providing the operations needed to define these terms. [9]
In quantum mechanics the notion of operational definitions is closely related to the idea of observables, that is, definitions based upon what can be measured. [10] [11]
Operational definitions are often most challenging in the fields of psychology and psychiatry, where intuitive concepts, such as intelligence, need to be operationally defined before they become amenable to scientific investigation, for example through processes such as IQ tests.
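Deviation IQ scoring illustrates one such process: a raw test score is converted to a scale with mean 100 and standard deviation 15 relative to a norming sample. A minimal sketch, with a hypothetical norming mean and standard deviation:

```python
def deviation_iq(raw_score, norm_mean, norm_sd):
    """Convert a raw test score to a deviation IQ (mean 100, SD 15)."""
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

# Hypothetical norming sample: mean raw score 50, standard deviation 10.
print(deviation_iq(65, norm_mean=50, norm_sd=10))  # 122.5
```

The construct "intelligence" remains contested, but the score itself is reproducible: the same test, norming sample, and conversion rule yield the same number.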
On October 15, 1970, the West Gate Bridge in Melbourne, Australia collapsed, killing 35 construction workers. The subsequent enquiry found that the failure arose because engineers had specified the supply of a quantity of flat steel plate. The word flat in this context lacked an operational definition, so there was no test for accepting or rejecting a particular shipment or for controlling quality.
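An operational definition of flat would have supplied exactly such a test. As a minimal sketch, assuming a hypothetical tolerance and a set of surface-height readings taken across a plate:

```python
def is_flat(heights_mm, tolerance_mm=2.0):
    """Accept a plate if the spread of surface-height readings is within
    tolerance. The 2.0 mm tolerance is a hypothetical value, not a
    standard from the enquiry."""
    return max(heights_mm) - min(heights_mm) <= tolerance_mm

readings = [0.0, 0.4, 1.1, 0.7, 0.2]  # heights relative to a datum, in mm
print("accept" if is_flat(readings) else "reject")  # accept
```

With such a rule in place, "flat" stops being a matter of judgment and becomes a shipment-by-shipment accept-or-reject decision.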
In his managerial and statistical writings, W. Edwards Deming placed great importance on the value of using operational definitions in all agreements in business.
In a process context, operational can also denote a working method or philosophy that focuses principally on the cause-and-effect relationships (or stimulus/response, behavior, etc.) of specific interest to a particular domain at a particular point in time. As a working method, it does not consider more general issues related to a domain, such as ontological questions.
Science and computing have become deeply interdependent, yet few practitioners can bridge a scientific domain, its computation, and computer science itself. One effect is that results obtained using a computer can be impossible to replicate if the code is poorly documented, contains errors, or omits parts of the computation entirely. [12]
Often the problems concern the persistence and clarity of variables, functions, and the like, as well as dependence on particular systems. More fundamentally, a physical standard such as length has matter as its definitional basis; it is far less obvious what can serve as the basis when standards are to be computationally framed.
Hence, operational definition can be used within the realm of the interactions of humans with advanced computational systems. One area of discourse here concerns computational thinking in the sciences, and how it might influence them, a theme taken up in American Scientist. [13]
One referenced project assembled fluid-dynamics experts, including specialists in the numerical modeling used in computational fluid dynamics, into a team with computer scientists. In the event, the computer scientists lacked the domain knowledge to contribute as much as they would have liked, and their role, to their chagrin, was often reduced to that of "mere" programmer.
Some knowledge-based engineering projects have found, similarly, that there is a trade-off between teaching programming to a domain expert and getting a programmer to understand the intricacies of a domain. How the trade-off resolves depends on the domain, and each team member must decide where to invest their time.
The International Society for Technology in Education (ISTE) has published a brochure giving an "operational definition" of computational thinking, together with an attempt at defining the related skills. [14]
One recognized skill is tolerance for ambiguity: the ability to handle open-ended problems. For instance, a knowledge-based engineering system can improve its operational character, and thereby its stability, through greater involvement of the subject-matter expert, which in turn raises questions about human limits. Computational results must often be taken at face value, for reasons even an expert cannot overcome (hence the need for something like a duck test), and the end proof may be final results (a reasonable facsimile by simulation or artifact, a working design, and so on) that are not guaranteed to be repeatable and may have been costly to attain in time and money.
In advanced modeling, with the requisite computational support such as knowledge-based engineering, mappings must be maintained between a real-world object, its abstracted counterparts as defined by the domain and its experts, and the computer models. Mismatches between domain models and their computational mirrors raise issues apropos of this topic. Techniques that allow the flexible modeling required for many hard problems must resolve questions of identity and type, which lead to methods such as duck typing. Many numerically focused domains use limit theories of various sorts to avoid the need for a duck test, with varying degrees of success; even so, issues remain, since representational frameworks bear heavily on what we can know.
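Duck typing is itself an operational stance: an object's type is decided by the operations it supports rather than by its declared class. A minimal Python illustration:

```python
class Duck:
    def quack(self):
        return "Quack"

class RobotDuck:
    def quack(self):
        return "Synthesized quack"

def test_quack(thing):
    """Operational test: anything that quacks passes, whatever its class."""
    return thing.quack()

# Both objects pass the same behavioral test, despite unrelated classes.
for candidate in (Duck(), RobotDuck()):
    print(test_quack(candidate))
```

The test says nothing about what a duck is; it only checks what the object does, which is precisely the operationalist move.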
In arguing for an object-based methodology, Peter Wegner [15] suggested that "positivist scientific philosophies, such as operationalism in physics and behaviorism in psychology" were powerfully applied in the early part of the 20th century. Computation, however, has changed the landscape. He notes that we need to distinguish four levels of "irreversible physical and computational abstraction" (Platonic abstraction, computational approximation, functional abstraction, and value computation), and that we must then rely on interactive methods that take behavior as their focus (see duck test).
The thermodynamic definition of temperature, due to Nicolas Léonard Sadi Carnot, refers to heat "flowing" between "infinite reservoirs". This is highly abstract and unsuited to the day-to-day world of science and trade. To make the idea concrete, temperature is instead defined in terms of operations with a gas thermometer. These, however, are sophisticated and delicate instruments, adapted only to national standards laboratories.
For day-to-day use, the International Temperature Scale of 1990 (ITS-90) is used, defining temperature in terms of the characteristics of the several specific sensor types required to cover the full range. One such is the electrical resistance of a platinum resistance thermometer of specified construction, calibrated against operationally defined fixed points.
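In practice, the calibrated sensor's resistance reading is converted to temperature through a published interpolation formula. As a rough sketch, using the IEC 60751 industrial coefficients for a Pt100 element rather than the full ITS-90 machinery, and valid only above 0 °C:

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for platinum (T >= 0 deg C):
A = 3.9083e-3   # per deg C
B = -5.775e-7   # per deg C squared
R0 = 100.0      # ohms at 0 deg C for a Pt100 element

def temperature_from_resistance(r_ohms):
    """Invert R(T) = R0 * (1 + A*T + B*T**2), valid for 0..850 deg C."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - r_ohms / R0))) / (2 * B)

print(round(temperature_from_resistance(138.51), 1))  # ~100.0 deg C
```

The sketch shows the operational shape of the definition: a specified instrument, a measured resistance, and a fixed conversion rule.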
Electric current is defined in terms of the force between two infinite parallel conductors, separated by a specified distance. This definition is too abstract for practical measurement, so a device known as a current balance is used to define the ampere operationally.
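The abstract definition fixes the force per unit length between two long parallel conductors at F/L = μ0·I1·I2/(2πd), so that one ampere in each conductor, one metre apart, yields 2 × 10⁻⁷ newtons per metre. A quick check of that defining value:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2 (pre-2019 exact value)

def force_per_metre(i1_amps, i2_amps, separation_m):
    """Force per unit length between two long parallel conductors."""
    return MU_0 * i1_amps * i2_amps / (2 * math.pi * separation_m)

print(force_per_metre(1.0, 1.0, 1.0))  # 2e-07 N/m, the defining value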
Unlike temperature and electric current, there is no abstract physical concept of the hardness of a material. It is a slightly vague, subjective idea, somewhat like the idea of intelligence. In fact, it leads to three more specific ideas: scratch hardness, indentation hardness, and rebound (or dynamic) hardness.
Of these, indentation hardness itself leads to many operational definitions, the most important of which are the Brinell, Vickers, and Rockwell hardness tests.
In all these, a process is defined for loading the indenter, measuring the resulting indentation, and calculating a hardness number. Each of these three sequences of measurement operations produces numbers that are consistent with our subjective idea of hardness. The harder the material to our informal perception, the greater the number it will achieve on our respective hardness scales. Furthermore, experimental results obtained using these measurement methods have shown that the hardness number can be used to predict the stress required to permanently deform steel, a characteristic that fits in well with our idea of resistance to permanent deformation. However, there is not always a simple relationship between the various hardness scales. Vickers and Rockwell hardness numbers exhibit qualitatively different behaviour when used to describe some materials and phenomena.
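For instance, the Vickers number follows from a defined loading-and-measurement sequence: a 136° diamond pyramid is pressed into the material with a known force, the indentation diagonals are measured, and HV ≈ 1.8544·F/d² is computed, with F in kilograms-force and d in millimetres. A minimal sketch:

```python
def vickers_hardness(load_kgf, mean_diagonal_mm):
    """HV = 2*F*sin(136 deg / 2) / d^2, approximately 1.8544 * F / d^2,
    with the load F in kgf and the mean indentation diagonal d in mm."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# A 30 kgf load leaving a 0.43 mm mean diagonal:
print(round(vickers_hardness(30, 0.43)))  # ~301 HV
```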
The constellation Virgo is a specific constellation of stars in the sky, hence the process of forming Virgo cannot be an operational definition, since it is historical and not repeatable. Nevertheless, the process whereby we locate Virgo in the sky is repeatable, so in this way, Virgo is operationally defined. In fact, Virgo can have any number of definitions (although we can never prove that we are talking about the same Virgo), and any number may be operational.
New academic disciplines appear in response to interdisciplinary activity at universities. An academic suggested that a subject matter area becomes a discipline when there are more than a dozen university departments using the same name for roughly the same subject matter. [16]
Theoretical definition of weight: a measurement of the gravitational force acting on an object.
Operational definition of weight: the result of measuring an object on a newton spring scale.
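A newton spring scale realizes the operational definition through Hooke's law, reporting force as the spring constant times the measured extension. A minimal sketch, with a hypothetical calibration constant:

```python
def weight_newtons(extension_m, spring_constant_n_per_m=200.0):
    """Operational weight: the spring-scale reading F = k * x (Hooke's law).
    The spring constant here is a hypothetical calibration value."""
    return spring_constant_n_per_m * extension_m

print(weight_newtons(0.049))  # 9.8 N, roughly a 1 kg mass at sea level
```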
Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
In computability theory, the Church–Turing thesis is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability, including Church's lambda calculus, the general recursive functions of Gödel and Herbrand, and Turing's machines.
Educational psychology is the branch of psychology concerned with the scientific study of human learning. The study of learning processes, from both cognitive and behavioral perspectives, allows researchers to understand individual differences in intelligence, cognitive development, affect, motivation, self-regulation, and self-concept, as well as their role in learning. The field of educational psychology relies heavily on quantitative methods, including testing and measurement, to enhance educational activities related to instructional design, classroom management, and assessment, which serve to facilitate learning processes in various educational settings across the lifespan.
Herbert Alexander Simon was an American scholar whose work also influenced the fields of computer science, economics, and cognitive psychology. His primary research interest was decision-making within organizations and he is best known for the theories of "bounded rationality" and "satisficing". He received the Turing Award in 1975 and the Nobel Memorial Prize in Economic Sciences in 1978. His research was noted for its interdisciplinary nature, spanning the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001, where he helped found the Carnegie Mellon School of Computer Science, one of the first such departments in the world.
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events. In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The scope and application of measurement are dependent on the context and discipline. In natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales.
Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally covers specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales.
In cognitive psychology, information processing is an approach to the goal of understanding human thinking that treats cognition as essentially computational in nature, with the mind being the software and the brain being the hardware. It arose in the 1940s and 1950s, after World War II. The information processing approach in psychology is closely allied to the computational theory of mind in philosophy; it is also related to cognitivism in psychology and functionalism in philosophy.
Experimental psychology refers to work done by those who apply experimental methods to psychological study and the underlying processes. Experimental psychologists employ human participants and animal subjects to study a great many topics, including sensation, perception, memory, cognition, learning, motivation, emotion; developmental processes, social psychology, and the neural substrates of all of these.
In research design, especially in psychology, social sciences, life sciences and physics, operationalization or operationalisation is a process of defining the measurement of a phenomenon which is not directly measurable, though its existence is inferred from other phenomena. Operationalization thus defines a fuzzy concept so as to make it clearly distinguishable, measurable, and understandable by empirical observation. In a broader sense, it defines the extension of a concept—describing what is and is not an instance of that concept. For example, in medicine, the phenomenon of health might be operationalized by one or more indicators like body mass index or tobacco smoking. As another example, in visual processing the presence of a certain object in the environment could be inferred by measuring specific features of the light it reflects. In these examples, the phenomena are difficult to directly observe and measure because they are general/abstract or they are latent. Operationalization helps infer the existence, and some elements of the extension, of the phenomena of interest by means of some observable and measurable effects they have.
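The body mass index example is concrete: BMI is defined operationally as mass in kilograms divided by the square of height in metres, with published cut-offs serving as the indicator. A minimal sketch using the widely cited WHO category boundaries:

```python
def bmi(mass_kg, height_m):
    """Body mass index: an operationalized indicator of health status."""
    return mass_kg / height_m ** 2

def bmi_category(value):
    """WHO category boundaries: 18.5, 25, and 30 kg/m^2."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal range"
    if value < 30:
        return "overweight"
    return "obese"

print(bmi_category(bmi(70, 1.75)))  # 70 / 3.0625 ~ 22.9 -> "normal range"
```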
A theoretical definition defines a term in an academic discipline, functioning as a proposal to see a phenomenon in a certain way. A theoretical definition is a proposed way of thinking about potentially related events. Theoretical definitions contain built-in theories; they cannot be simply reduced to describing a set of observations. The definition may contain implicit inductions and deductive consequences that are part of the theory. A theoretical definition of a term can change, over time, based on the methods in the field that created it.
Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables. Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio. This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others. Other classifications include those by Mosteller and Tukey, and by Chrisman.
Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject.
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. It is closely related to functionalism, a broader theory that defines mental states by what they do rather than what they are made of.
Concept learning, also known as category learning, concept attainment, and concept formation, is defined by Bruner, Goodnow, & Austin (1956) as "the search for and testing of attributes that can be used to distinguish exemplars from non exemplars of various categories". More simply put, concepts are the mental categories that help us classify objects, events, or ideas, building on the understanding that each object, event, or idea has a set of common relevant features. Thus, concept learning is a strategy which requires a learner to compare and contrast groups or categories that contain concept-relevant features with groups or categories that do not contain concept-relevant features.
A system is a group of interacting or interrelated elements that act according to a set of rules to form a unified whole. A system, surrounded and influenced by its environment, is described by its boundaries, structure and purpose and is expressed in its functioning. Systems are the subjects of study of systems theory and other systems sciences.
Facet theory is a metatheory for the multivariate behavioral sciences that posits that scientific theories and measurements can be advanced by discovering relationships between conceptual classifications of research variables and empirical partitions of data-representation spaces. For this purpose, facet theory proposes procedures for (1) Constructing or selecting variables for observation, using the mapping sentence technique, and (2) Analyzing multivariate data, using data representation spaces, notably those depicting similarity measures, or partially ordered sets, derived from the data.
In philosophy, a construct is an object which is ideal, that is, an object of the mind or of thought, meaning that its existence may be said to depend upon a subject's mind. This contrasts with any possibly mind-independent objects, the existence of which purportedly does not depend on the existence of a conscious observing subject. Thus, the distinction between these two terms may be compared to that between phenomenon and noumenon in other philosophical contexts and to many of the typical definitions of the terms realism and idealism also. In the correspondence theory of truth, ideas, such as constructs, are to be judged and checked according to how well they correspond with their referents, often conceived as part of a mind-independent reality.
This glossary of computer science is a list of definitions of terms and concepts used in computer science, its sub-disciplines, and related fields, including terms relevant to software, data science, and computer programming.
Electrical stimulation of the amygdala elicits many of the behaviors used to define a state of fear, with selected target areas of the amygdala producing specific effects.