Campbell's law

Campbell's law is an adage developed by Donald T. Campbell, a psychologist and social scientist who often wrote about research methodology, which states:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.[1]

Applications

Campbell's law is related to the cobra effect, the unintended negative consequences that public policy and other government interventions can produce in economics, commerce, and healthcare.[2]

Education

In 1976, Campbell wrote: "Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (Similar biases of course surround the use of objective tests in courses or as entrance examinations.)"[1]

Campbell's law is frequently invoked to point out the negative consequences of high-stakes testing in U.S. classrooms, which may take the form of teaching to the test or outright cheating.[3] The same dynamic, identified as "The High-Stakes Education Rule", is analyzed in the book "Measuring Up: What Educational Testing Really Tells Us".[4]

Campbell's law helps explain why Race to the Top, an Obama administration program, and the No Child Left Behind Act, enacted during the George W. Bush administration, can actually impair rather than improve educational outcomes.[5]

Similar rules

There are closely related ideas known by different names, such as Goodhart's law and the Lucas critique. Another concept related to Campbell's law emerged in 2006, when UK researchers Rebecca Boden and Debbie Epstein published an analysis of evidence-based policy, a practice espoused by Prime Minister Tony Blair. In the paper, Boden and Epstein described how a government that tries to base its policy on evidence can actually end up producing corrupted data, because it "seeks to capture and control the knowledge producing processes to the point where this type of 'research' might best be described as 'policy-based evidence'."[6]

When someone distorts decisions in order to improve a performance measure, they often engage in surrogation: they come to believe that the measure is a better proxy for true performance than it really is.[7]

Campbell's law also carries a more constructive, if complicated, message. Measuring progress requires both quantitative and qualitative indicators.[8] However, once quantitative indicators are used for evaluation, they become subject to distortion and manipulation, so concrete safeguards must be adopted to limit the alteration of the underlying information. In his article "Assessing the Impact of Planned Social Change",[9] Campbell emphasized this point with the formulation quoted above: indicators used for decision-making invite corruption pressures and distort the very processes they are meant to monitor.

Notes

  1. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X.
  2. Coy, Peter (2021-03-26). "Goodhart's Law Rules the Modern World. Here Are Nine Examples". Bloomberg.com. Archived from the original on 2021-04-25. Retrieved 2021-06-03.
  3. Aviv, Rachel (21 July 2014). "Wrong Answer". The New Yorker.
  4. Koretz, Daniel M. (2009). Measuring Up. Harvard University Press. ISBN 978-0-674-03972-8.
  5. Porter-Magee, Kathleen. "Trust but verify: The real lessons of Campbell's Law". The Thomas B. Fordham Institute. Retrieved 2018-06-30.
  6. Boden, Rebecca; Epstein, Debbie (2006). "Managing the research imagination? Globalisation and research in higher education". Globalisation, Societies and Education. 4 (2): 223–236. doi:10.1080/14767720600752619. S2CID 144077070.
  7. Bentley, Jeremiah W. (2017-02-24). "Decreasing Operational Distortion and Surrogation through Narrative Reporting". The Accounting Review. Rochester, New York. SSRN 2924726.
  8. "Quantitative & Qualitative Indicators". Monitoring & Evaluation. Retrieved 2018-06-30.
  9. Campbell, Donald T. (1979-01-01). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X. ISSN 0149-7189.
