Hawthorne effect

The Hawthorne effect (also referred to as the observer effect [1] [2] ) is a type of reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed. [3] [4] This can undermine the integrity of research, particularly the relationships between variables. [5]

The original research at the Hawthorne Works, a telephone-equipment factory in Cicero, Illinois, on lighting changes and changes to work structure such as working hours and break times was originally interpreted by Elton Mayo and others to mean that paying attention to overall worker needs would improve productivity.

Later interpretations, such as Landsberger's, suggested that the novelty of being research subjects and the increased attention they received could lead to temporary increases in workers' productivity. This interpretation was dubbed "the Hawthorne effect". It is similar to a phenomenon referred to as the novelty/disruption effect. [6]

History

Aerial view of the Hawthorne Works, ca. 1925

The term was coined in 1958 by Henry A. Landsberger [7] when he was analyzing earlier experiments from 1924–32 at the Hawthorne Works (a Western Electric factory outside Chicago). The Hawthorne Works had commissioned a study to see if its workers would become more productive in higher or lower levels of light. The workers' productivity seemed to improve when changes were made, and slumped when the study ended. It was suggested that the productivity gain occurred as a result of the motivational effect on the workers of the interest being shown in them. [8]

This effect was observed for minute increases in illumination. In these lighting studies, light intensity was altered to examine its effect on worker productivity. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies. [9] Only occasionally are the rest of the studies mentioned. [9]

Although research on workplace illumination formed the basis of the Hawthorne effect, other changes, such as maintaining clean work stations, clearing floors of obstacles, and even relocating workstations, also resulted in increased productivity for short periods. Thus the term is used to identify any type of short-lived increase in productivity. [7] [10] [11]

Relay assembly experiments

In one of the studies, researchers chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927–1932) assembling telephone relays.

Output was measured mechanically by counting how many finished relays each worker dropped down a chute. This measurement began in secret two weeks before the women were moved to an experiment room and continued throughout the study. In the experiment room they had a supervisor who discussed changes with them. Several working-condition variables, such as break times and working hours, were altered over the course of the study.

Changing a variable usually increased productivity, even if the change simply restored the original condition. However, it has been argued that this is the natural process of human beings adapting to their environment without knowing the objective of the experiment. Researchers concluded that the workers worked harder because they thought they were being monitored individually.

Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Elton Mayo, [12] was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment." (There was a second relay assembly test room study whose results were not as significant as the first experiment.)

Bank wiring room experiments

The purpose of the next study was to find out how payment incentives would affect productivity. The surprising result was that productivity actually decreased. Workers apparently had become suspicious that their productivity may have been boosted to justify firing some of the workers later on. [13] The study was conducted by Elton Mayo and W. Lloyd Warner between 1931 and 1932 on a group of fourteen men who put together telephone switching equipment. The researchers found that although the workers were paid according to individual productivity, productivity decreased because the men were afraid that the company would lower the base rate. Detailed observation of the men revealed the existence of informal groups or "cliques" within the formal groups. These cliques developed informal rules of behavior as well as mechanisms to enforce them. The cliques served to control group members and to manage bosses; when bosses asked questions, clique members gave the same responses, even if they were untrue. These results show that workers were more responsive to the social force of their peer groups than to the control and incentives of management.

Interpretation and criticism

Richard Nisbett has described the Hawthorne effect as "a glorified anecdote", saying that "once you have got the anecdote, you can throw away the data." [14] Other researchers have attempted to explain the effects with various interpretations. J. G. Adair warns of gross factual inaccuracy in most secondary publications on the Hawthorne effect, and notes that many studies failed to find it. [15] He argues that it should be viewed as a variant of Orne's (1973) experimental demand effect. For Adair, an experimental effect depends on the participants' interpretation of the situation, which is why manipulation checks are important in social-science experiments. It is therefore not awareness per se, nor special attention per se, but the participants' interpretation that must be investigated in order to discover whether and how the experimental conditions interact with the participants' goals: for example, whether participants believe something, act on it, or do not see it as in their interest.

Possible explanations for the Hawthorne effect include the impact of feedback and of motivation towards the experimenter. When an experiment provides performance feedback for the first time, receiving that feedback may improve participants' skills. [16] Research on the demand effect also suggests that people may be motivated to please the experimenter, at least if doing so does not conflict with any other motive. [17] They may also be suspicious of the experimenter's purpose. [16] The Hawthorne effect may therefore occur only when there is usable feedback or a change in motivation.

Parsons defines the Hawthorne effect as "the confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do" [i.e. learning effects, both permanent skill improvement and feedback-enabled adjustments to suit current goals]. His key argument is that in the studies where workers dropped their finished goods down chutes, the participants had access to the counters of their work rate. [16]

Mayo contended that the effect was due to the workers reacting to the sympathy and interest of the observers. He noted that the experiment tested the overall effect, not individual factors separately. He also discussed it not so much as an experimenter effect but as a management effect: management can make workers perform differently because it makes them feel differently, much of which has to do with feeling free and in control as a group rather than closely supervised. The experimental manipulations were important in convincing the workers that conditions really were different. The experiment was repeated with similar effects on mica-splitting workers. [12]

Clark and Sugrue, in a review of educational research, report that uncontrolled novelty effects cause on average a rise of 30% of a standard deviation (SD) (i.e., a score rise from the 50th to about the 63rd percentile), which decays to a small level after 8 weeks. In more detail: 50% of an SD for up to 4 weeks; 30% of an SD for 5–8 weeks; and 20% of an SD after more than 8 weeks (which is less than 1% of the variance). [18] :333
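Under the usual assumption of normally distributed scores, an effect size expressed as a fraction of an SD can be translated into the percentile figures quoted above. A minimal sketch (the function name is ours, not Clark and Sugrue's):

```python
from math import erf, sqrt

def percentile_equivalent(d):
    """Percentile rank reached by an average treated subject, relative to a
    50th-percentile baseline, for an effect size of d SDs (normal CDF)."""
    return 0.5 * (1 + erf(d / sqrt(2)))

# Clark and Sugrue's decay pattern for novelty effects:
for period, d in [("up to 4 weeks", 0.5), ("5-8 weeks", 0.3), ("beyond 8 weeks", 0.2)]:
    print(f"{period}: {d} SD -> percentile rank {percentile_equivalent(d):.2f}")
```

An effect of 0.3 SD corresponds to a percentile rank of roughly 0.62, which is the basis for parenthetical conversions like the one in the passage above.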

Harry Braverman points out that the Hawthorne tests were based on industrial psychology and were investigating whether workers' performance could be predicted by pre-hire testing. The Hawthorne study showed "that the performance of workers had little relation to ability and in fact often bore an inverse relation to test scores...". [19] Braverman argues that the studies really showed that the workplace was not "a system of bureaucratic formal organisation on the Weberian model, nor a system of informal group relations, as in the interpretation of Mayo and his followers but rather a system of power, of class antagonisms". This discovery was a blow to those hoping to apply the behavioral sciences to manipulate workers in the interest of management.[ citation needed ]

The economists Steven Levitt and John A. List long pursued without success a search for the base data of the original illumination experiments, before finding it on a microfilm at the University of Wisconsin–Milwaukee in 2011. [20] Re-analysing it, they found slight evidence for the Hawthorne effect over the long run, but nowhere near as drastic an effect as initially suggested. [21] This finding supported the analysis of a 1992 article by S. R. G. Jones examining the relay experiments. [22] [23] Despite the absence of evidence for the Hawthorne effect in the original study, List has said that he remains confident that the effect is genuine. [24]

Gustav Wickström and Tom Bendix (2000) argue that the supposed "Hawthorne effect" is actually ambiguous and disputable, and instead recommend that to evaluate intervention effectiveness, researchers should introduce specific psychological and social variables that may have affected the outcome. [25]

It is also possible that the illumination experiments can be explained by a longitudinal learning effect. Parsons has declined to analyse the illumination experiments, on the grounds that they have not been properly published and so he cannot get at details, whereas he had extensive personal communication with Roethlisberger and Dickson. [16]

Evaluation of the Hawthorne effect continues in the present day. [26] [27] [28] [29] Despite the criticisms, the phenomenon is often taken into account when designing studies and drawing their conclusions. [5] Ways to avoid it have also been developed: for instance, in field studies observation can be conducted from a distance, from behind a barrier such as a two-way mirror, or using an unobtrusive measure. [30]

Greenwood, Bolton, and Greenwood (1983) interviewed some of the participants in the experiments and found that the participants had been paid significantly better (https://doi.org/10.1177/014920638300900213).

Trial effect

Various medical scientists have studied possible trial effect (clinical trial effect) in clinical trials. [31] [32] [33] Some postulate that, beyond just attention and observation, there may be other factors involved, such as slightly better care; slightly better compliance/adherence; and selection bias. The latter may have several mechanisms: (1) Physicians may tend to recruit patients who seem to have better adherence potential and lesser likelihood of future loss to follow-up. (2) The inclusion/exclusion criteria of trials often exclude at least some comorbidities; although this is often necessary to prevent confounding, it also means that trials may tend to work with healthier patient subpopulations.
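The selection-bias mechanism in (2) can be shown with a toy calculation: if a trial's exclusion criteria screen out comorbid patients, the trial sample's average outcome exceeds that of the full target population even with no treatment effect at all. All numbers below are invented for illustration.

```python
# Hypothetical outcome scores for a target patient population,
# split by comorbidity status (values are invented).
healthy = [72, 75, 78, 80, 81]   # patients a typical trial would enrol
comorbid = [55, 58, 60, 62]      # patients excluded by inclusion/exclusion criteria

def mean(xs):
    return sum(xs) / len(xs)

trial_sample_mean = mean(healthy)           # 77.2: the trial's apparent outcome
population_mean = mean(healthy + comorbid)  # 69.0: the target population's outcome

print(trial_sample_mean, population_mean)
```

The gap between the two means arises purely from who was enrolled, which is why trial results may not translate directly into real-world effectiveness.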

Secondary observer effect

Although the observer effect as popularized in the Hawthorne experiments may have been falsely identified (see the discussion above), the popularity and plausibility of the observer effect in theory have led researchers to postulate that it could also operate at a second level. It has thus been proposed that there is a secondary observer effect when researchers working with secondary data, such as survey data or various indicators, may impact the results of their scientific research. Rather than having an effect on the subjects (as with the primary observer effect), the researchers likely have their own idiosyncrasies that influence how they handle the data and even what data they obtain from secondary sources. For one, researchers may choose seemingly innocuous steps in their statistical analyses that end up causing significantly different results from the same data: for example, weighting strategies, factor-analytic techniques, or choice of estimation. In addition, researchers may use software packages that have different default settings, which lead to small but significant fluctuations. Finally, the data that researchers use may not be identical, even though they seem so. For example, the OECD collects and distributes various socio-economic data; however, these data change over time, such that a researcher who downloads the Australian GDP data for the year 2000 may obtain slightly different values than a researcher who downloads the same Australian GDP 2000 data a few years later. The idea of the secondary observer effect was floated by Nate Breznau in a thus far relatively obscure paper. [34]
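As a concrete illustration of the point about software defaults: common statistical environments disagree on the default denominator for the standard deviation (R's sd() divides by n − 1, while NumPy's np.std divides by n), so the same data yield different values. A sketch using Python's standard library, with invented data:

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # invented example values

# n - 1 denominator (sample SD; the default in R's sd())
sample_sd = statistics.stdev(data)
# n denominator (population SD; the default in NumPy's np.std())
population_sd = statistics.pstdev(data)

print(sample_sd, population_sd)  # ~2.138 vs 2.0
```

A researcher who reports "the standard deviation" without noting the convention will obtain reproducibly different numbers depending on the package, which is exactly the kind of idiosyncratic, tool-driven variation the secondary observer effect describes.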

Although little attention has been paid to this phenomenon, its scientific implications are potentially large. [35] Evidence of the effect may be seen in recent studies that assign a particular problem to a number of researchers or research teams, who then work independently using the same data to try to find a solution. This process, called crowdsourced data analysis, was used in a groundbreaking study by Rafael Silberzahn, Eric Uhlmann, Dan Martin, Brian Nosek, and others (2015) about red cards and player race in football (i.e., soccer). [36] [37]


References

  1. Monahan T, Fisher JA (June 10, 2010). "Benefits of 'Observer Effects': Lessons from the Field". Qualitative Research. 10 (3): 357–376. doi:10.1177/1468794110362874. PMC   3032358 . PMID   21297880.
  2. "Hawthorne Effect | What is Hawthorne Effect? - MBA Learner". MBA Learner. 2018-02-22. Archived from the original on 2018-02-26. Retrieved 2018-02-25.
  3. McCarney R, Warner J, Iliffe S, van Haselen R, Griffin M, Fisher P (2007). "The Hawthorne Effect: a randomised, controlled trial". BMC Med Res Methodol. 7: 30. doi:10.1186/1471-2288-7-30. PMC   1936999 . PMID   17608932.
  4. Fox NS, Brennan JS, Chasen ST (2008). "Clinical estimation of fetal weight and the Hawthorne effect". Eur. J. Obstet. Gynecol. Reprod. Biol. 141 (2): 111–4. doi:10.1016/j.ejogrb.2008.07.023. PMID   18771841.
  5. Salkind, Neil (2010). Encyclopedia of Research Design, Volume 2. Thousand Oaks, CA: SAGE Publications, Inc. p. 561. ISBN   9781412961271.
  6. Roeckelein, Jon E. (1998). Dictionary of Theories, Laws, and Concepts in Psychology. Westport, CT: Greenwood Publishing Group. p. 175. ISBN   0313304602.
  7. Henry A. Landsberger, Hawthorne Revisited, Ithaca, 1958.
  8. Cox, Erika (2000). Psychology for AS Level. Oxford: Oxford University Press. p. 158. ISBN   0198328249.
  9. Olson, R.; Verley, J.; Santos, L.; Salas, C. (2004). "What We Teach Students About the Hawthorne Studies: A Review of Content Within a Sample of Introductory I-O and OB Textbooks" (PDF). The Industrial-Organizational Psychologist. 41: 23–39. Archived from the original (PDF) on 2011-11-03.
  10. Elton Mayo, Hawthorne and the Western Electric Company, The Social Problems of an Industrial Civilisation, Routledge, 1949.
  11. Bowey, Dr. Angela M. "MOTIVATION AT WORK: a key issue in remuneration". Archived from the original on 1 July 2007. Retrieved 22 November 2011.CS1 maint: BOT: original-url status unknown (link)
  12. Mayo, Elton (1945) Social Problems of an Industrial Civilization. Boston: Division of Research, Graduate School of Business Administration, Harvard University, p. 72
  13. Henslin, James M. (2008). Sociology: a down to earth approach (9th ed.). Pearson Education. p. 140. ISBN   978-0-205-57023-2.
  14. Kolata, G. (December 6, 1998). "Scientific Myths That Are Too Good to Die". New York Times .
  15. Adair, J.G. (1984). "The Hawthorne Effect: A reconsideration of the methodological artifact" (PDF). Journal of Applied Psychology . 69 (2): 334–345. doi:10.1037/0021-9010.69.2.334. Archived from the original (PDF) on 2013-12-15. Retrieved 2013-12-12.
  16. Parsons, H. M. (1974). "What happened at Hawthorne?: New evidence suggests the Hawthorne effect resulted from operant reinforcement contingencies". Science. 183 (4128): 922–932. doi:10.1126/science.183.4128.922. PMID   17756742.
  17. Steele-Johnson, D.; Beauregard, Russell S.; Hoover, Paul B.; Schmidt, Aaron M. (2000). "Goal orientation and task demand effects on motivation, affect, and performance". The Journal of Applied Psychology. 85 (5): 724–738. doi:10.1037/0021-9010.85.5.724. PMID   11055145.
  18. Clark, Richard E.; Sugrue, Brenda M. (1991). "30. Research on instructional media, 1978-1988". In G.J.Anglin (ed.). Instructional technology: past, present, and future. Englewood, Colorado: Libraries Unlimited. pp. 327–343.
  19. Braverman, Harry (1974). Labor and Monopoly Capital. New York: Monthly Review Press. pp. 144–145. ISBN   978-0853453406.
  20. BBC Radio 4 programme More Or Less, "The Hawthorne Effect", broadcast 12 October 2013, presented by Tim Harford with contributions by John List
  21. Levitt, Steven D.; List, John A. (2011). "Was There Really a Hawthorne Effect at the Hawthorne Plant? An Analysis of the Original Illumination Experiments" (PDF). American Economic Journal: Applied Economics. 3 (1): 224–238. doi:10.1257/app.3.1.224.
  22. "Light work". The Economist. June 6, 2009. p. 80.
  23. Jones, Stephen R. G. (1992). "Was there a Hawthorne effect?". American Journal of Sociology . 98 (3): 451–468. doi:10.1086/230046. JSTOR   2781455.
  24. Podcast, More or Less 12 October 2013, from 6m 15 sec in
  25. Wickström, Gustav; Bendix, Tom (2000). "The "Hawthorne effect": what did the original Hawthorne studies actually show?". Scandinavian Journal of Work, Environment & Health. 26 (4): 363–367. https://www.sjweh.fi/show_abstract.php?abstract_id=555
  26. Kohli E, Ptak J, Smith R, Taylor E, Talbot EA, Kirkland KB (2009). "Variability in the Hawthorne effect with regard to hand hygiene performance in high- and low-performing inpatient care units". Infect Control Hosp Epidemiol. 30 (3): 222–5. doi:10.1086/595692. PMID   19199530.
  27. Cocco G (2009). "Erectile dysfunction after therapy with metoprolol: the hawthorne effect". Cardiology. 112 (3): 174–7. doi:10.1159/000147951. PMID   18654082.
  28. Leonard KL (2008). "Is patient satisfaction sensitive to changes in the quality of care? An exploitation of the Hawthorne effect". J Health Econ. 27 (2): 444–59. doi:10.1016/j.jhealeco.2007.07.004. PMID   18192043.
  29. "Hawthorne Effect | What is Hawthorne Effect? - MBA Learner". MBA Learner. 2018-02-22. Archived from the original on 2018-02-26. Retrieved 2018-02-25.
  30. Kirby, Mark; Kidd, Warren; Koubel, Francine; Barter, John; Hope, Tanya; Kirton, Alison; Madry, Nick; Manning, Paul; Triggs, Karen (2000). Sociology in Perspective. Oxford: Heinemann. pp. G-359. ISBN   9780435331603.
  31. Menezes P, Miller WC, Wohl DA, Adimora AA, Leone PA, Eron JJ (2011), "Does HAART efficacy translate to effectiveness? Evidence for a trial effect", PLoS ONE , 6 (7): e21824, Bibcode:2011PLoSO...621824M, doi:10.1371/journal.pone.0021824, PMC   3135599 , PMID   21765918.
  32. Braunholtz DA, Edwards SJ, Lilford RJ (2001), "Are randomized clinical trials good for us (in the short term)? Evidence for a "trial effect"", J Clin Epidemiol, 54 (3): 217–224, doi:10.1016/s0895-4356(00)00305-x, PMID   11223318.
  33. McCarney R, Warner J, Iliffe S, van Haselen R, Griffin M, Fisher P (2007), "The Hawthorne Effect: a randomised, controlled trial", BMC Medical Research Methodology, 7: 30, doi:10.1186/1471-2288-7-30, PMC   1936999 , PMID   17608932.
  34. Breznau, Nate (2016-05-03). "Secondary observer effects: idiosyncratic errors in small-N secondary data analysis". International Journal of Social Research Methodology. 19 (3): 301–318. doi:10.1080/13645579.2014.1001221. ISSN   1364-5579.
  35. Shi, Yuan; Sorenson, Olav; Waguespack, David (2017-01-30). "Temporal Issues in Replication: The Stability of Centrality-Based Advantage". Sociological Science. 4: 107–122. doi: 10.15195/v4.a5 . ISSN   2330-6696.
  36. "OSF | Crowdsourcing Analytics - Final Manuscript.pdf". osf.io. Retrieved 2016-12-07.
  37. "Crowdsourcing Data to Improve Macro-Comparative Research". Policy and Politics Journal. 2015-03-26. Retrieved 2016-12-07.
