Hawthorne effect

The Hawthorne effect refers to a type of reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed. [1] [2] The effect takes its name from research conducted at the Hawthorne Western Electric plant; however, the well-known and remarkable descriptions of the effect turned out to be fictional. [3]


The original research involved workers who made electrical relays at the Hawthorne Works, a Western Electric plant in Cicero, Illinois. The famous lighting study was conducted between 1924 and 1927. Workers experienced a series of lighting changes, and productivity was said to increase with almost any change in the lighting; this turned out not to be true. [3] In a later study associated with Elton Mayo, which ran from 1928 to 1932, a series of changes in work structure (e.g., changes in rest periods) were implemented in a group of five women. However, this was a methodologically poor, uncontrolled study that did not permit any firm conclusions to be drawn. [4]

Later interpretations, such as Landsberger's, [5] suggested that the novelty of being research subjects and the increased attention this brought could lead to temporary increases in workers' productivity. This interpretation was dubbed "the Hawthorne effect," although the data do not support that view.


Aerial view of the Hawthorne Works, ca. 1925

The term "Hawthorne effect" was coined in 1958 by Henry A. Landsberger [5] when he was analyzing the Hawthorne studies conducted between 1924 and 1932 at the Hawthorne Works (a Western Electric factory in Cicero, outside Chicago). The Hawthorne Works had commissioned a study to determine if its workers would become more productive in higher or lower levels of light. The workers' productivity seemed to improve when changes were made, and slumped when the study ended. It was suggested that the productivity gain occurred as a result of the motivational effect on the workers of the interest being shown in them. [6]

This effect was observed for minute increases in illumination. In these lighting studies, light intensity was altered to examine its effect on worker productivity. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies. [7] Only occasionally are the rest of the studies mentioned. [7]

Although research on workplace illumination formed the basis of the Hawthorne effect, other changes, such as maintaining clean work stations, clearing floors of obstacles, and even relocating workstations, also resulted in increased productivity for short periods. Thus the term is used to identify any type of short-lived increase in productivity. [5] [8] [9]

Relay assembly experiments

In one of the studies, researchers chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927–1932) assembling telephone relays.

Output was measured mechanically by counting how many finished relays each worker dropped down a chute. This measurement began in secret two weeks before the women were moved to an experiment room and continued throughout the study. In the experiment room they had a supervisor who discussed the changes with them and reviewed their productivity. The variables manipulated included changes to rest periods and other working conditions.

Changing a variable usually increased productivity, even if the variable was just a change back to the original condition. However, this may simply reflect the natural process of workers adapting to their environment, without knowing the objective of the experiment. Researchers concluded that the workers worked harder because they thought that they were being monitored individually.

Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Elton Mayo, [10] was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment." (There was a second relay assembly test room study whose results were not as significant as the first experiment.)

Bank wiring room experiments

The purpose of the next study, conducted by Elton Mayo and W. Lloyd Warner between 1931 and 1932 on a group of fourteen men who put together telephone switching equipment, was to find out how payment incentives would affect productivity. The surprising result was that productivity actually decreased. Although the workers were paid according to individual productivity, they had apparently become afraid that the company would lower the base rate, and suspicious that boosted productivity might be used to justify firing some of the workers later on. [11] Detailed observation of the men revealed the existence of informal groups or "cliques" within the formal groups. These cliques developed informal rules of behavior as well as mechanisms to enforce them. The cliques served to control group members and to manage bosses; when bosses asked questions, clique members gave the same responses, even if they were untrue. These results show that workers were more responsive to the social force of their peer groups than to the control and incentives of management.

Interpretation and criticism

Richard Nisbett has described the Hawthorne effect as "a glorified anecdote," saying that "once you have got the anecdote, you can throw away the data." [12] Other researchers have attempted to explain the effects with various interpretations. J. G. Adair warned of gross factual inaccuracy in most secondary publications on the Hawthorne effect and noted that many studies failed to find it. [13] He argued that it should be viewed as a variant of Orne's (1973) experimental demand effect. For Adair, the Hawthorne effect depended on the participants' interpretation of the situation; awareness of being observed was not itself the source of the effect, and the key question was how participants' interpretation of the situation interacted with their goals. One implication is that manipulation checks are important in social science experiments.

Possible explanations for the Hawthorne effect include the impact of feedback and of motivation towards the experimenter. Participants who receive feedback on their performance for the first time may improve their skills. [14] Research on the demand effect also suggests that people may be motivated to please the experimenter, at least if doing so does not conflict with any other motive. [15] They may also be suspicious of the purpose of the experimenter. [14] Therefore, the Hawthorne effect may occur only when there is usable feedback or a change in motivation.

Parsons defined the Hawthorne effect as "the confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do" [i.e. learning effects, both permanent skill improvement and feedback-enabled adjustments to suit current goals]. His key argument was that in the studies where workers dropped their finished goods down chutes, the participants had access to the counters of their work rate. [14]

Mayo contended that the effect was due to the workers reacting to the sympathy and interest of the observers. He discussed the study not as demonstrating an experimenter effect but as a management effect: how management can make workers perform differently because they feel differently. He suggested that much of the Hawthorne effect concerned the workers feeling free and in control as a group rather than being supervised. The experimental manipulations were important in convincing the workers to feel this way: conditions in the special five-person work group really were different from conditions on the shop floor. The study was repeated with similar effects on mica-splitting workers. [10]

Clark and Sugrue, in a review of educational research, reported that uncontrolled novelty effects cause on average a rise of 30% of a standard deviation (SD) (i.e., a rise from the 50th to roughly the 62nd percentile of scores), with the rise decaying to a much smaller effect after 8 weeks. In more detail: 50% of an SD for up to 4 weeks; 30% of an SD for 5–8 weeks; and 20% of an SD for more than 8 weeks (which is less than 1% of the variance). [16] :333
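These standard-deviation figures can be translated into percentile terms by assuming normally distributed scores. The sketch below applies the usual normal-CDF conversion; the final variance step uses one common d-to-r conversion, which is a convention introduced here rather than something taken from the review itself:

```python
from math import erf, sqrt

def percentile_of(effect_size_sd):
    """Percentile reached by a median scorer after a shift of
    effect_size_sd standard deviations, under a normal distribution."""
    return 100 * 0.5 * (1 + erf(effect_size_sd / sqrt(2)))

# Novelty-effect sizes reported by Clark and Sugrue (fractions of an SD)
for weeks, d in [("up to 4 weeks", 0.5), ("5-8 weeks", 0.3), ("> 8 weeks", 0.2)]:
    print(f"{weeks}: {d} SD -> ~{percentile_of(d):.0f}th percentile")

# Variance explained by a d = 0.2 effect, via r = d / sqrt(d^2 + 4)
d = 0.2
r_squared = d ** 2 / (d ** 2 + 4)
print(f"d = 0.2 explains about {100 * r_squared:.1f}% of variance")
```

Running this confirms the review's framing: the initial 0.5 SD boost moves a median scorer only to about the 69th percentile, and the residual 0.2 SD effect accounts for roughly 1% of variance.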

Harry Braverman pointed out that the Hawthorne tests were based on industrial psychology and the researchers involved were investigating whether workers' performance could be predicted by pre-hire testing. The Hawthorne study showed "that the performance of workers had little relation to ability and in fact often bore an inverse relation to test scores...". [17] Braverman argued that the studies really showed that the workplace was not "a system of bureaucratic formal organisation on the Weberian model, nor a system of informal group relations, as in the interpretation of Mayo and his followers but rather a system of power, of class antagonisms". This discovery was a blow to those hoping to apply the behavioral sciences to manipulate workers in the interest of management.[ citation needed ]

The economists Steven Levitt and John A. List long pursued without success a search for the base data of the original illumination studies (they were not true experiments, although some authors labeled them as such), before finding it on microfilm at the University of Wisconsin–Milwaukee in 2011. [18] Re-analysing it, they found slight evidence for the Hawthorne effect over the long run, but nothing nearly as drastic as initially suggested. [19] This finding supported the analysis of an article by S. R. G. Jones in 1992 examining the relay experiments. [20] [21] Despite the absence of evidence for the Hawthorne effect in the original study, List has said that he remains confident that the effect is genuine. [22]

Gustav Wickström and Tom Bendix (2000) argue that the supposed "Hawthorne effect" is actually ambiguous and disputable, and instead recommend that to evaluate intervention effectiveness, researchers should introduce specific psychological and social variables that may have affected the outcome. [23]

It is also possible that the illumination experiments can be explained by a longitudinal learning effect. Parsons declined to analyse the illumination experiments, on the grounds that they were not properly published and so he could not get at the details, whereas he had extensive personal communication with Roethlisberger and Dickson. [14]

Evaluation of the Hawthorne effect continues in the present day. [24] [25] [26] [27] Despite the criticisms, however, the phenomenon is often taken into account when designing studies and their conclusions. [28] Some researchers have also developed ways to avoid it. For instance, observation in a field study may be conducted from a distance or from behind a barrier such as a two-way mirror, or an unobtrusive measure may be used. [29]

Greenwood, Bolton, and Greenwood (1983) interviewed some of the participants in the experiments and found that the participants were paid significantly better. [30]

Trial effect

Various medical scientists have studied possible trial effect (clinical trial effect) in clinical trials. [31] [32] [33] Some postulate that, beyond just attention and observation, there may be other factors involved, such as slightly better care; slightly better compliance/adherence; and selection bias. The latter may have several mechanisms: (1) Physicians may tend to recruit patients who seem to have better adherence potential and lesser likelihood of future loss to follow-up. (2) The inclusion/exclusion criteria of trials often exclude at least some comorbidities; although this is often necessary to prevent confounding, it also means that trials may tend to work with healthier patient subpopulations.

Secondary observer effect

Despite the observer effect as popularized in the Hawthorne experiments being perhaps falsely identified (see above discussion), the popularity and plausibility of the observer effect in theory has led researchers to postulate that this effect could take place at a second level. Thus it has been proposed that there is a secondary observer effect when researchers working with secondary data, such as survey data or various indicators, may impact the results of their scientific research. Rather than having an effect on the subjects (as with the primary observer effect), the researchers likely have their own idiosyncrasies that influence how they handle the data and even what data they obtain from secondary sources.

For one, the researchers may choose seemingly innocuous steps in their statistical analyses that end up causing significantly different results using the same data, e.g., weighting strategies, factor analytic techniques, or choice of estimation. In addition, researchers may use software packages that have different default settings that lead to small but significant fluctuations. Finally, the data that researchers use may not be identical, even though they seem so. For example, the OECD collects and distributes various socio-economic data; however, these data change over time, such that a researcher who downloads the Australian GDP data for the year 2000 may have slightly different values than a researcher who downloads the same Australian GDP 2000 data a few years later. The idea of the secondary observer effect was floated by Nate Breznau in a thus far relatively obscure paper. [34]
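As a minimal illustration of how software defaults alone can shift results, the snippet below (with purely hypothetical numbers, not real OECD data) summarizes the same small series using the two standard-deviation conventions that different packages silently default to:

```python
import statistics

# Hypothetical GDP-growth series; the data are identical in both analyses.
gdp_growth = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 1.9, 3.0]

pop_sd = statistics.pstdev(gdp_growth)    # population SD (divides by n),
                                          # the default in e.g. numpy.std
sample_sd = statistics.stdev(gdp_growth)  # sample SD (divides by n - 1),
                                          # the default in e.g. pandas and R

print(f"population SD: {pop_sd:.4f}")
print(f"sample SD:     {sample_sd:.4f}")
print(f"relative gap:  {100 * (sample_sd / pop_sd - 1):.1f}%")
```

With only eight observations the two conventions differ by about 7%; the gap shrinks as samples grow, which is why such idiosyncrasies matter most in the small-N secondary analyses Breznau discusses.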

Although little attention has been paid to this phenomenon, the scientific implications are very large. [35] Evidence of this effect may be seen in recent studies that assign a particular problem to a number of researchers or research teams who then work independently using the same data to try to find a solution. This process, called crowdsourced data analysis, was used in a groundbreaking study by Raphael Silberzahn, Eric Uhlmann, Daniel Martin, Brian Nosek, et al. (2015) about red cards and player race in football (i.e., soccer). [36] [37]

References


  1. McCarney R, Warner J, Iliffe S, van Haselen R, Griffin M, Fisher P (2007). "The Hawthorne Effect: a randomised, controlled trial". BMC Med Res Methodol. 7: 30. doi:10.1186/1471-2288-7-30. PMC   1936999 . PMID   17608932.
  2. Fox NS, Brennan JS, Chasen ST (2008). "Clinical estimation of fetal weight and the Hawthorne effect". Eur. J. Obstet. Gynecol. Reprod. Biol. 141 (2): 111–4. doi:10.1016/j.ejogrb.2008.07.023. PMID   18771841.
  3. Levitt, S.D., & List, J.A. (2011). Was there really a Hawthorne effect at the Hawthorne plant? An analysis of the original illumination experiments. American Economic Journal: Applied Economics, 3, 224–238.
  4. Schonfeld, I.S., & Chang, C.-H. (2017). Occupational health psychology: Work, stress, and health. New York: Springer Publishing Company.
  5. Landsberger, H.A. (1958). Hawthorne Revisited, Ithaca, 1958.
  6. Cox, Erika (2000). Psychology for AS Level. Oxford: Oxford University Press. p. 158. ISBN   0198328249.
  7. Olson, R.; Verley, J.; Santos, L.; Salas, C. (2004). "What We Teach Students About the Hawthorne Studies: A Review of Content Within a Sample of Introductory I-O and OB Textbooks" (PDF). The Industrial-Organizational Psychologist. 41: 23–39. Archived from the original (PDF) on 2011-11-03.
  8. Elton Mayo, Hawthorne and the Western Electric Company, The Social Problems of an Industrial Civilisation, Routledge, 1949.
  9. Bowey, Dr. Angela M. "MOTIVATION AT WORK: a key issue in remuneration". Archived from the original on 1 July 2007. Retrieved 22 November 2011.
  10. Mayo, Elton (1945). Social Problems of an Industrial Civilization. Boston: Division of Research, Graduate School of Business Administration, Harvard University, p. 72.
  11. Henslin, James M. (2008). Sociology: a down to earth approach (9th ed.). Pearson Education. p. 140. ISBN   978-0-205-57023-2.
  12. Kolata, G. (December 6, 1998). "Scientific Myths That Are Too Good to Die". New York Times .
  13. Adair, J.G. (1984). "The Hawthorne Effect: A reconsideration of the methodological artifact" (PDF). Journal of Applied Psychology . 69 (2): 334–345. doi:10.1037/0021-9010.69.2.334. Archived from the original (PDF) on 2013-12-15. Retrieved 2013-12-12.
  14. Parsons, H. M. (1974). "What happened at Hawthorne?: New evidence suggests the Hawthorne effect resulted from operant reinforcement contingencies". Science. 183 (4128): 922–932. doi:10.1126/science.183.4128.922. PMID 17756742. S2CID 38816592.
  15. Steele-Johnson, D.; Beauregard, Russell S.; Hoover, Paul B.; Schmidt, Aaron M. (2000). "Goal orientation and task demand effects on motivation, affect, and performance". The Journal of Applied Psychology. 85 (5): 724–738. doi:10.1037/0021-9010.85.5.724. PMID   11055145.
  16. Clark, Richard E.; Sugrue, Brenda M. (1991). "30. Research on instructional media, 1978-1988". In G.J.Anglin (ed.). Instructional technology: past, present, and future. Englewood, Colorado: Libraries Unlimited. pp. 327–343.
  17. Braverman, Harry (1974). Labor and Monopoly Capital. New York: Monthly Review Press. pp. 144–145. ISBN 978-0853453406.
  18. BBC Radio 4 programme More Or Less, "The Hawthorne Effect", broadcast 12 October 2013, presented by Tim Harford with contributions by John List
  19. Levitt, Steven D.; List, John A. (2011). "Was There Really a Hawthorne Effect at the Hawthorne Plant? An Analysis of the Original Illumination Experiments" (PDF). American Economic Journal: Applied Economics. 3 (1): 224–238. doi:10.1257/app.3.1.224. S2CID   16678444.
  20. "Light work". The Economist. June 6, 2009. p. 80.
  21. Jones, Stephen R. G. (1992). "Was there a Hawthorne effect?" (PDF). American Journal of Sociology . 98 (3): 451–468. doi:10.1086/230046. JSTOR   2781455.
  22. Podcast, More or Less 12 October 2013, from 6m 15 sec in
  23. Wickström, Gustav; Bendix, Tom (2000). "The "Hawthorne effect" — what did the original Hawthorne studies actually show?". Scandinavian Journal of Work, Environment & Health. 26 (4): 363–367. doi:10.5271/sjweh.555.
  24. Kohli E, Ptak J, Smith R, Taylor E, Talbot EA, Kirkland KB (2009). "Variability in the Hawthorne effect with regard to hand hygiene performance in high- and low-performing inpatient care units". Infect Control Hosp Epidemiol. 30 (3): 222–5. doi:10.1086/595692. PMID   19199530.
  25. Cocco G (2009). "Erectile dysfunction after therapy with metoprolol: the hawthorne effect". Cardiology. 112 (3): 174–7. doi:10.1159/000147951. PMID   18654082. S2CID   41426273.
  26. Leonard KL (2008). "Is patient satisfaction sensitive to changes in the quality of care? An exploitation of the Hawthorne effect". J Health Econ. 27 (2): 444–59. doi:10.1016/j.jhealeco.2007.07.004. PMID   18192043.
  27. "What is Hawthorne Effect?". MBA Learner. 2018-02-22. Archived from the original on 2018-02-26. Retrieved 2018-02-25.
  28. Salkind, Neil (2010). Encyclopedia of Research Design, Volume 2. Thousand Oaks, CA: SAGE Publications, Inc. p. 561. ISBN   9781412961271.
  29. Kirby, Mark; Kidd, Warren; Koubel, Francine; Barter, John; Hope, Tanya; Kirton, Alison; Madry, Nick; Manning, Paul; Triggs, Karen (2000). Sociology in Perspective. Oxford: Heinemann. pp. G-359. ISBN   9780435331603.
  30. Greenwood, Ronald G.; Bolton, Alfred A.; Greenwood, Regina A. (1983). "Hawthorne a Half Century Later: Relay Assembly Participants Remember". Journal of Management. 9 (2): 217–231. doi:10.1177/014920638300900213. S2CID   145767422.
  31. Menezes P, Miller WC, Wohl DA, Adimora AA, Leone PA, Eron JJ (2011), "Does HAART efficacy translate to effectiveness? Evidence for a trial effect", PLoS ONE , 6 (7): e21824, Bibcode:2011PLoSO...621824M, doi:10.1371/journal.pone.0021824, PMC   3135599 , PMID   21765918.
  32. Braunholtz DA, Edwards SJ, Lilford RJ (2001), "Are randomized clinical trials good for us (in the short term)? Evidence for a "trial effect"", J Clin Epidemiol, 54 (3): 217–224, doi:10.1016/s0895-4356(00)00305-x, PMID   11223318.
  33. McCarney R, Warner J, Iliffe S, van Haselen R, Griffin M, Fisher P (2007), "The Hawthorne Effect: a randomised, controlled trial", BMC Medical Research Methodology, 7: 30, doi:10.1186/1471-2288-7-30, PMC   1936999 , PMID   17608932.
  34. Breznau, Nate (2016-05-03). "Secondary observer effects: idiosyncratic errors in small-N secondary data analysis". International Journal of Social Research Methodology. 19 (3): 301–318. doi:10.1080/13645579.2014.1001221. ISSN   1364-5579. S2CID   145402768.
  35. Shi, Yuan; Sorenson, Olav; Waguespack, David (2017-01-30). "Temporal Issues in Replication: The Stability of Centrality-Based Advantage". Sociological Science. 4: 107–122. doi: 10.15195/v4.a5 . ISSN   2330-6696.
  36. Silberzahn, Raphael; Uhlmann, Eric L.; Martin, Daniel P.; Nosek, Brian A.; et al. (2015). "Many analysts, one dataset: Making transparent how variations in analytical choices affect". OSF.io. Retrieved 2016-12-07.
  37. "Crowdsourcing Data to Improve Macro-Comparative Research". Policy and Politics Journal. 2015-03-26. Retrieved 2016-12-07.