Experimental political science is the use of experiments, which may be natural or controlled, to implement the scientific method in political science.
In his 1909 American Political Science Association presidential address, A. Lawrence Lowell claimed: “We are limited by the impossibility of experiment. Politics is an observational, not an experimental science….” [1] He argued that political science, as an emerging discipline, did not need to follow the experiment-led approach of the natural sciences. [2] At that time, observational research was the only way of doing research in political science.
The first experiment in political science is generally regarded to be Harold Gosnell's 1924 study of voter turnout in Chicago. [3] In this experiment, he randomly assigned districts to receive information on voter registration and encouragement to vote. [4]
In the 1950s, with the behavioral revolution in full swing, experimental politics passed its first watershed. [5]
The first experiment published in the American Political Science Review appeared in 1956, 50 years after the journal's inception. [6] The American Political Science Review, founded in 1906 and published by Cambridge University Press, covers all fields of political science. Another mainstream political science journal, the Journal of Conflict Resolution, also began publishing articles on experimental research during this period.
In the 1970s, with the rise of political psychology, resistance to experimental politics softened. [7]
In the 1980s, computer-assisted telephone interviewing appeared and was used to collect experimental data; these technological advances drove the initial rise of experiments. [8]
The period before 2000 can be classified as the prelude to the experimental era, a long incubation stage for experimental politics. [8] During this period, experimentation was not the main academic activity of political science, and many important discoveries came from scholars who combined multiple methods.
The period from 2001 to 2009 was the first generation in which experiments were widely used. [8] Experimentation became part of the political scientist's toolkit during these years, aided by the development of information technology.
Since 2010, the field has been in a new era, sometimes described as experimental political science 2.0. [8]
With the development of the internet, the emergence of commercial internet survey panels and crowdsourcing platforms has made data much cheaper and easier to obtain than ever before. [9] Abundant, accessible data is the basis for the wider implementation of experiments in political science.
In 2010, the Experimental Research section of the American Political Science Association held its first conference. [10]
In 2012, Bond et al.'s experiment delivered political mobilization messages to 61 million Facebook users. They aimed to explore whether an “I Voted” widget that announced one's election participation to others increased turnout among Facebook users and their friends. [11] Today's mature social media platforms provide the opportunity to intervene experimentally on populations of vast size.
In 2014, the first issue of the Journal of Experimental Political Science was published by the Experimental Research section of the American Political Science Association.
From 2010 to 2019, 75 articles using experimental methods were published in the American Political Science Review, compared with 76 in the entire period from 1950 to 2009. [12]
Notable experts in experimental methodology in political science include Rebecca Morton and Donald Green.
Among the areas in which it is used are voter turnout and political behaviour, public opinion, political psychology, and mass communication.
Social scientists, including political scientists, have long used validity to assess whether a particular analytical method can provide credible evidence for theoretical inferences. Donald Campbell defined the validity of an empirical research design or method as the degree of approximation between the knowledge inferences based on that design or method and the actual situation. [16] Put differently, validity is the extent to which we can believe that an empirical inference reflects the real regularities of human society. [16] This definition has been accepted by most scholars.
Validity can be further divided into internal validity and external validity. Campbell further refined internal validity into three parts: construct validity, causal validity, and statistical validity. [17]
Classifying validity helps researchers describe and conduct research from different angles. [18] Validity itself is a holistic concept: no single type exists in isolation from the others. [18] For example, high construct validity means that the research design fits closely with the theory being studied, which makes the relationship between causal variables more stable from a statistical perspective and thereby supports statistical validity. [19]
Internal validity is a prerequisite for external validity: if a reasonable estimate cannot be made for the target group, there is no point in extending estimates to groups beyond it. [17]
Internal validity refers to the degree to which knowledge inferences based on empirical research approximate the actual attitudes or behavioural patterns of the target population. [17]
Construct validity involves the generality of empirical inferences and aims to evaluate whether a research design is a reasonable and targeted assessment of the target theory. [17]
Causal validity is similar to the "identification problem" in economics, which examines whether an empirical design can effectively eliminate interference factors and provide accurate evidence for determining causal effects or mechanisms. [17]
Statistical validity refers to whether there is a significant and stable statistical relationship between the core causal factors of the study at the empirical level. The most common test of statistical validity is repeated testing of the same sample of the target population. [17]
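As a concrete illustration of such a statistical-validity check, the following is a minimal sketch (assumed, not drawn from the cited sources) that asks whether a treatment-control difference in a simulated turnout experiment could plausibly arise by chance, using a permutation test; the sample size and effect size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated experiment: 200 subjects, half assigned to a
# mobilization-message treatment that raises turnout propensity from
# 0.5 to 0.6 (all numbers are illustrative, not from the source).
n = 200
treated = rng.permutation(np.repeat([True, False], n // 2))
turnout = rng.binomial(1, np.where(treated, 0.6, 0.5))

def diff_in_means(outcome, assignment):
    """Treatment-control difference in mean outcomes."""
    return outcome[assignment].mean() - outcome[~assignment].mean()

observed = diff_in_means(turnout, treated)

# Permutation test: re-randomize the treatment labels many times and ask
# how often a difference at least this large arises by chance alone.
draws = np.array([
    diff_in_means(turnout, rng.permutation(treated))
    for _ in range(10_000)
])
p_value = np.mean(np.abs(draws) >= abs(observed))

print(f"observed difference: {observed:.3f}, permutation p-value: {p_value:.3f}")
```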
External validity refers to how empirical inferences apply to populations other than the target population. [17]
Laboratory experiments place subjects in specific environments and examine how individuals make specific political decisions (such as voting, jury trials, and legislation). [20] Laboratory experiments have stricter control over the experimental site and time, and the entire experimental process must be completed under the supervision and guidance of the researcher. [21] Questionnaires are usually used to collect the subjects' personal information and experimental results. [21]
Laboratory experiments are usually carried out in independent spaces, reflecting the researcher's control over the timing and setting of the study. [22] Laboratory experiments emphasize controlling the environment and other non-experimental elements so as to exclude interference factors as far as possible and measure the causal effects of the studied factors accurately. [21] The laboratory itself does not mean the specialized laboratory of the natural sciences: classrooms, activity rooms, or other independent spaces can serve as experimental sites.
The conjoint survey experiment is a method for examining multidimensional preferences. [23]
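As a rough illustration (a sketch, not drawn from the cited source), a conjoint design presents respondents with pairs of profiles whose attribute levels are independently randomized; the attribute names and levels below are hypothetical.

```python
import random

# Hypothetical candidate attributes for a conjoint choice task; the
# attribute names and levels are illustrative, not from the source.
ATTRIBUTES = {
    "age": [35, 50, 65],
    "gender": ["female", "male"],
    "occupation": ["lawyer", "teacher", "business owner"],
    "experience": ["none", "state legislator", "governor"],
}

def random_profile():
    """Draw one candidate profile with each attribute level chosen
    independently and uniformly at random."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def choice_task():
    """A pairwise choice task: two fully randomized candidate profiles.
    The respondent is asked which candidate they would vote for."""
    return random_profile(), random_profile()

candidate_a, candidate_b = choice_task()
print("Candidate A:", candidate_a)
print("Candidate B:", candidate_b)
```

Because every attribute varies independently of the others, the average effect of each attribute level on the probability that a profile is chosen can be estimated separately.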
At one extreme, "pure" laboratory experiments are carried out in environments that researchers highly control. Students are often a convenient sample in lab experiments because they are easy to recruit and can follow experimental instructions reliably. At the other extreme are natural experiments, in which participants are in their everyday environments and do not know they are being observed.
Lab-in-the-field experimental research lies on a continuous spectrum between "pure" lab experiments and natural experiments. [24] It is conducted in diverse environments with various types of subjects. The following four types of lab-in-the-field experiments can be distinguished according to the type of research problem they address.
A hypothesis may have to be tested on a specific population. [25] Alternatively, the researcher may hope to test whether a result generalizes to a broader, or more representative, population. [25]
Lab procedures can be taken to the field as measurement tools. Risk aversion, time preference (patience), altruism, cooperation, competition, and in-group discrimination are the most common measures. [25]
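One common instrument of this kind is a multiple price list for risk aversion, in which a subject repeatedly chooses between a fixed lottery and a sure amount that rises row by row; the row at which the subject switches to the sure amount indexes risk tolerance. Below is a minimal sketch under assumed payoffs (the amounts and choices are hypothetical, not from the source).

```python
# Hypothetical multiple price list: in each row the subject chooses
# between a fixed lottery (50% chance of 10, otherwise 0; expected
# value 5) and a sure amount that rises row by row.
SURE_AMOUNTS = [1, 2, 3, 4, 5, 6, 7, 8, 9]

def risk_measure(choices):
    """choices[i] is True if the subject took the sure amount in row i.
    The first sure amount the subject accepts is a simple index:
    switching below the lottery's expected value of 5 suggests risk
    aversion, above it risk seeking."""
    for amount, took_sure in zip(SURE_AMOUNTS, choices):
        if took_sure:
            return amount
    return None  # never switched: highly risk seeking

# Example: a subject who switches once the sure amount reaches 4
# (below the expected value of 5) is classified as risk averse.
choices = [False, False, False, True, True, True, True, True, True]
print("switch point:", risk_measure(choices))
```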
This kind of lab-in-the-field experiment is used when participants who have already been treated are recruited. [25] The experimenters recruit subjects rather than implement treatments.
The experimenter's purpose may be to teach the target population about the games by using a lab-in-the-field approach. [26] Through this process, the experimenter can target a particular policy problem; the game itself can be the treatment in this circumstance.
Audit studies are often used to measure bias or discrimination. [26] An audit study is built as a large field experiment, one that aims to measure participants' behaviour in the field rather than to change it. [26] In most cases, researchers use methods such as sending messages to measure participants' behaviour unobtrusively.
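A typical audit design can be sketched as follows (an illustration under assumed placeholders, not the procedure of any cited study): otherwise identical messages differ only in a randomized cue, such as the putative sender's name, and response rates are compared across the randomized conditions.

```python
import random

# Hypothetical audit study: identical information requests sent to
# offices, differing only in the randomized sender name. The names
# and outcome data are illustrative placeholders, not from the source.
SENDER_NAMES = ["Name signalling group A", "Name signalling group B"]

def assign_senders(n_offices):
    """Randomly assign one sender name to each targeted office."""
    return [random.choice(SENDER_NAMES) for _ in range(n_offices)]

def response_rates(assignments, responded):
    """Compare response rates across the randomized sender names."""
    rates = {}
    for name in SENDER_NAMES:
        hits = [r for a, r in zip(assignments, responded) if a == name]
        rates[name] = sum(hits) / len(hits) if hits else float("nan")
    return rates

assignments = assign_senders(100)
responded = [random.random() < 0.5 for _ in assignments]  # placeholder outcomes
print(response_rates(assignments, responded))
```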
Experimental research methods in political science raise unavoidable ethical questions: can humans be used for experiments? [27] While experiments in political science usually do not cause physical harm, elements of deception are common. [27]
When conducting political science experiments, researchers must intervene in the process of data generation. Political science, as a social science, studies human behaviour, so political science experiments inevitably have an impact on people. [28] For example, subjects make choices they would not otherwise face, or are placed into experiences or conditions they would not otherwise encounter.
The experimental process itself is not the only way political science research affects human life. Other channels include the dissemination of experimental results, the influence of political science scholars on their students, and the influence of research results on institutions and professional organizations. Like most other professions, political science has its own professional ethics.
In 1967, the American Political Science Association (APSA) created a committee to explore issues “relevant to the problems of maintaining a high sense of professional standards and responsibilities.” [28] Marver H. Bernstein served as its chairman, and the committee produced the first written code of rules of professional conduct. [28]
In 1968, the Standing Committee on Professional Ethics was founded. Its work mainly includes reviewing formal grievances, mediating between parties and other organizations, and issuing formal advisory opinions. [28]
In 1989 and 2008, the code of rules of professional conduct was revised. [28]
After the 1970s, many political scientists began to use experimental methods, focusing their research on political behaviour, public opinion, and mass communication. Classic topics include the behavioural preferences and choices of different social groups in collective action, the influence of campaign propaganda on voting outcomes, the influence of media coverage on public attitudes, and the influence of personality on political participation.
Since the beginning of the 21st century, political science research has clearly shifted from correlational to causal research. [29] Political scientists are increasingly dissatisfied with merely confirming the strength of relationships between factors and have gradually devoted themselves to examining causal effects and mechanisms among variables. [30] Measuring causal effects accurately is a significant problem and challenge in the social sciences, and the means of protecting inferences from selection bias in observational studies are minimal. In experimental research, randomly assigning subjects to the treatment group and the control group ensures that, in theory, there are no observable or unobservable differences between the groups before the intervention. [30] The objectivity of the conclusion is built on randomization in this way.
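This logic can be seen in a small simulation (a sketch, not drawn from the cited sources; all variable names and effect sizes are hypothetical): a background trait that would confound an observational comparison is balanced across randomly assigned groups in expectation, so a simple difference in means recovers the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical setup: an unobserved trait (e.g., political interest)
# raises the outcome regardless of treatment; the true treatment
# effect is 2.0 by construction.
trait = rng.normal(0, 1, n)
treatment = rng.integers(0, 2, n)            # random assignment
outcome = 2.0 * treatment + 3.0 * trait + rng.normal(0, 1, n)

# Randomization balances the trait across groups...
print("mean trait, treated:", trait[treatment == 1].mean())
print("mean trait, control:", trait[treatment == 0].mean())

# ...so the raw difference in means is an unbiased estimate of the
# causal effect, with no adjustment for the (unobserved) trait.
effect = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
print("estimated effect:", round(effect, 2), "(true effect: 2.0)")
```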
The inherent flaws of various non-experimental methods have been continuously exposed over the past 20 years. The most notable drawback is the limitation of observable data, which hinders researchers' further exploration of causality. [31] In a modern society with highly developed information technology, controlling and collecting data through experiments is becoming ever more efficient and low-cost.
The emergence of new communication methods and new media is both a challenge and an opportunity for social science research. Social media and online blogs transmit large amounts of political information and make it more accessible to the public. Whereas information for social science research was once generally scarce, researchers must now screen information for validity in an era of information explosion. [32]
Current political scientists are increasingly inclined to explore the psychological bases and attitudinal characteristics of political behaviour through experimental means. A distinctive feature of current research is its focus on specific social groups, such as unmarried people, students, and ethnic minorities. [33] Investigations of the general population remain insufficient.
Different research methods have distinct advantages and disadvantages, and the disadvantages of one method are not offset by the advantages of another. More and more scholars have begun to combine different methods to make up for the shortcomings of a single experimental approach. [33] In the social sciences, experimental methods need to be combined with other research methods to produce innovation in both theory and methodology and to promote the development of the discipline. [32]