Campbell's law

Campbell's law is an adage developed by Donald T. Campbell, a psychologist and social scientist who often wrote about research methodology, which states:

"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." [1]

Applications

Campbell's law can be seen as an example of the cobra effect, which is the sometimes unintended negative effect of public policy and other government interventions in economics, commerce, and healthcare.

Education

In 1976, Campbell wrote: "Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (Similar biases of course surround the use of objective tests in courses or as entrance examinations.)" [1]

The social science principle of Campbell's law is used to point out the negative consequences of high-stakes testing in U.S. classrooms. This may take the form of teaching to the test or outright cheating. [2] "The High-Stakes Education Rule" is identified and analyzed in the book "Measuring Up: What Educational Testing Really Tells Us". [3]

Campbell's law has also been invoked to argue that programs such as the Obama administration's Race to the Top and the Bush administration's No Child Left Behind Act can actually impair, rather than improve, educational outcomes. [4]

Similar rules

There are closely related ideas known by different names, such as Goodhart's law and the Lucas critique. Another concept related to Campbell's law emerged in 2006 when UK researchers Rebecca Boden and Debbie Epstein published an analysis of evidence-based policy, a practice espoused by Prime Minister Tony Blair. In the paper, Boden and Epstein described how a government that tries to base its policy on evidence can actually end up producing corrupted data because it "seeks to capture and control the knowledge producing processes to the point where this type of 'research' might best be described as 'policy-based evidence'." [5]

When people distort decisions in order to improve a performance measure, they often engage in surrogation, coming to believe that the measure is a better proxy for true performance than it really is. [6]
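The mechanism behind this distortion can be illustrated with a toy model (not drawn from the article; the effort budget, the "gaming" multiplier, and the function names are illustrative assumptions): agents split a fixed amount of effort between real work and activity that inflates the indicator without improving the underlying outcome. Once decisions hinge on the indicator, effort shifts toward gaming it, so the indicator rises even as true performance falls.

```python
def outcomes(gaming_share, effort=100.0):
    """Return (true_performance, indicator) when a fraction of a fixed
    effort budget is diverted to gaming the metric rather than doing
    real work. All quantities are illustrative, not empirical."""
    real = effort * (1.0 - gaming_share)   # effort spent on the actual goal
    gamed = effort * gaming_share          # effort spent inflating the number
    true_performance = real                # only real work moves the outcome
    indicator = real + 2.0 * gamed         # gaming moves the metric cheaply
    return true_performance, indicator

# Before high stakes are attached, no effort is diverted:
before = outcomes(gaming_share=0.0)   # (100.0, 100.0)

# After the indicator drives decisions, half the effort is diverted:
after = outcomes(gaming_share=0.5)    # (50.0, 150.0)
```

In this sketch the indicator improves from 100 to 150 while true performance drops from 100 to 50 — the divergence Campbell's law predicts when a measure becomes a target.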

Campbell's law also carries a more constructive, if complicated, message. Measuring progress with both quantitative and qualitative indicators remains important, [7] but attaching high stakes to quantitative indicators invites their distortion and manipulation, so concrete safeguards are needed to limit the alteration and manipulation of the underlying data. In "Assessing the Impact of Planned Social Change", [8] Campbell emphasized that the more a quantitative social indicator is used for social decision-making, the more subject it becomes to corruption pressures and the more likely it is to distort the social processes it is intended to monitor.

Notes

  1. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X.
  2. Aviv, Rachel (21 July 2014). "Wrong Answer". The New Yorker.
  3. Koretz, Daniel M. (2009). Measuring Up. Harvard University Press. ISBN 978-0-674-03972-8.
  4. "Trust but verify: The real lessons of Campbell's Law". The Thomas B. Fordham Institute. edexcellence.net. Retrieved 2018-06-30.
  5. Boden, Rebecca; Epstein, Debbie (2006). "Managing the research imagination? Globalisation and research in higher education". Globalisation, Societies and Education. 4 (2): 223–236. doi:10.1080/14767720600752619.
  6. Bentley, Jeremiah W. (2017-02-24). "Decreasing Operational Distortion and Surrogation through Narrative Reporting". Rochester, NY. SSRN 2924726.
  7. "Quantitative & Qualitative Indicators". Monitoring & Evaluation. Retrieved 2018-06-30.
  8. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X. ISSN 0149-7189.
