Randy Elliot Bennett

Born: Brooklyn, New York
Occupation: Educational researcher
Nationality: American
Notable works:
Formative Assessment: A Critical Review
Cognitively Based Assessment of, for, and as Learning: A Preliminary Theory of Action for Summative and Formative Assessment
Educational Assessment: What to Watch in a Rapidly Changing World
The Changing Nature of Educational Assessment
Toward a Theory of Socioculturally Responsive Assessment
Notable awards:
National Academy of Education elected member
AERA E.F. Lindquist Award
AERA Cognition and Assessment SIG Outstanding Contribution to Research in Cognition and Assessment Award
NCME Bradley Hanson Award
AERA Fellow
Teachers College, Columbia University Distinguished Alumni Award

Randy Elliot Bennett is an American educational researcher who specializes in educational assessment. He is currently the Norman O. Frederiksen Chair in Assessment Innovation at Educational Testing Service (ETS) in Princeton, NJ. His research and writing focus on bringing together advances in cognitive science, technology, and measurement to improve teaching and learning. He received the ETS Senior Scientist Award in 1996, the ETS Career Achievement Award in 2005, the Teachers College, Columbia University Distinguished Alumni Award in 2016, Fellow status in the American Educational Research Association (AERA) in 2017, the National Council on Measurement in Education's (NCME) Bradley Hanson Award for Contributions to Educational Measurement in 2019 (with H. Guo, M. Zhang, and P. Deane), the E. F. Lindquist Award from AERA and ACT in 2020, elected membership in the National Academy of Education in 2022, and the AERA Cognition and Assessment Special Interest Group Outstanding Contribution to Research in Cognition and Assessment Award in 2024. [1] [2] [3] [4] [5] [6] Bennett was also elected president of both the International Association for Educational Assessment (IAEA), a worldwide organization composed primarily of governmental and nongovernmental measurement organizations, and NCME, whose members work in universities, testing organizations, state and federal education departments, and school districts.


Publications

Bennett is the author or editor of nine books, as well as over 100 journal articles, chapters, and technical reports. Those publications have concentrated on several themes. The 1998 publication, Reinventing Assessment: Speculations on the Future of Large-Scale Educational Testing, [7] presented a three-stage framework for how paper-and-pencil tests would gradually transition to digital form, eventually melding with online activities, blurring the distinction between learning and assessment, and leading to improvements in both pursuits. A series of subsequent publications built upon the work of Robert Glaser, Norman O. Frederiksen, Samuel Messick, James Pellegrino, Lorrie Shepard, and others to create a unified model for formative and summative assessment under the Cognitively Based Assessment of, for, and as Learning (CBAL) initiative. [8] [9] This work, noted in the citations for both the E.F. Lindquist Award and his AERA Fellow designation, [2] [4] is described in two journal articles, Transforming K-12 Assessment [10] and Cognitively Based Assessment of, for, and as Learning. [11] The latter publication articulated the assumptions underlying the CBAL assessment model in a detailed "theory of action," which described the assessment system's components, intended outcomes, and the mechanisms through which those outcomes should be achieved, predating the now generally recommended use of such theories of action in operational testing programs. [12] [13]

The journal article, Formative Assessment: A Critical Review, [14] questioned the magnitude of efficacy claims, the meaningfulness of existing definitions, and the general absence of disciplinary considerations in the conceptualization and implementation of formative assessment. [15] The article encouraged a deeper examination of premises, more careful consideration of effectiveness claims, and a move toward incorporating domain considerations directly into the structure and practice of formative assessment. [16] [17] [18]

Two reports, Online Assessment in Mathematics and Writing [19] and Problem Solving in Technology-Rich Environments, [20] documented studies that helped set the stage for moving the US National Assessment of Educational Progress (NAEP) from paper presentation to computer delivery. [21] [22]

Several recent articles have called attention to the need for testing companies and state education departments to exercise caution in using artificial intelligence (AI) methods for scoring consequential tests. That theme was developed in a book chapter, Validity and Automated Scoring, [23] and summarized in The Changing Nature of Educational Assessment. [24] These publications note that, in automated essay scoring for example, caution is needed because of the inscrutability of some AI scoring methods, their reliance on correlates of writing quality that can be easily manipulated for undeserved score gains, and the routine practice of building scoring algorithms to model the judgments of operational human graders, thereby unintentionally incorporating human biases.
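The last of those concerns can be made concrete with a minimal sketch. The code below is illustrative only, not ETS's or any operational program's method; the features and training data are hypothetical. It fits a scoring model to human-assigned grades using superficial features of the text: because the model is trained to reproduce human judgments, any bias in those judgments is inherited, and because the features are mere correlates of quality, they can be gamed.

```python
import numpy as np

def essay_features(text):
    # Superficial correlates of writing quality (a hypothetical feature set).
    # Proxies like sheer length can be padded for undeserved score gains.
    words = text.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    return np.array([1.0, float(n_words), avg_word_len])

# Hypothetical training data: essays paired with grades from human raters.
# Whatever biases those raters held are baked into the fitted weights.
essays = [
    "Testing is fine.",
    "Assessment should support learning as well as measure it.",
    "A longer essay that develops its argument across several sentences. "
    "It offers evidence, considers a counterpoint, and then concludes.",
]
human_grades = np.array([2.0, 4.0, 5.0])

X = np.vstack([essay_features(e) for e in essays])
weights, *_ = np.linalg.lstsq(X, human_grades, rcond=None)

def machine_score(text):
    # The machine reproduces the human judgments it was trained on.
    return float(essay_features(text) @ weights)

print(round(machine_score("Padding an essay with many, many extra words may "
                          "raise a length-sensitive score without any real "
                          "improvement in quality."), 2))
```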

Bennett's latest work centers on equity in assessment. The commentary, The Good Side of COVID-19, [25] makes the case that standardized testing, and educational assessment more generally, must be rethought to better align with the multicultural, pluralistic society the US is rapidly becoming. In a follow-up article, Toward a Theory of Socioculturally Responsive Assessment, [26] he assembles assessment design principles from multiple literatures and uses them to fashion a definition, a theory, and a suggested path for implementing measures more attuned to the social, cultural, and other relevant characteristics of diverse individuals and the contexts in which they live. That line of thinking is elaborated upon in Let's Agree to (Mostly) Agree: A Response to Solano-Flores. [27]

Books

Andrade, H. L., Bennett, R. E., & Cizek, G. J. (Eds.). (2019). Handbook of formative assessment in the disciplines. New York: Routledge.

Bennett, R. E., & von Davier, M. (Eds.). (2017). Advancing human assessment: The methodological, psychological, and policy contributions of ETS. Cham, Switzerland: Springer Open.

Bennett, R. E., & Ward, W. C. (Eds.). (1993). Construction vs. choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment. Hillsdale, NJ: Lawrence Erlbaum Associates.

Willingham, W. W., Ragosta, M., Bennett, R. E., Braun, H. I., Rock, D. A., & Powers, D. E. (1988). Testing handicapped people. Boston, MA: Allyn & Bacon.

Bennett, R. E. (Ed.). (1987). Planning and evaluating computer education programs. Columbus, OH: Merrill.

Bennett, R. E., & Maher, C. A. (Eds.). (1986). Emerging perspectives in the assessment of exceptional children. New York: Haworth Press.

Cline, H. F., Bennett, R. E., Kershaw, R. C., Schneiderman, M. B., Stecher, B., & Wilson, S. (1986). The electronic schoolhouse: The IBM secondary school computer education program. Hillsdale, NJ: Lawrence Erlbaum Associates.

Bennett, R. E., & Maher, C. A. (Eds.). (1984). Microcomputers and exceptional children. New York: Haworth Press.

Maher, C. A., & Bennett, R. E. (1984). Planning and evaluating special education services. Englewood Cliffs, NJ: Prentice-Hall.

Related Research Articles

Instructional design (ID), also known as instructional systems design and originally known as instructional systems development (ISD), is the practice of systematically designing, developing, and delivering instructional materials and experiences, both digital and physical, in a consistent and reliable fashion that makes the acquisition of knowledge efficient, effective, appealing, engaging, and inspiring. The process consists broadly of determining the state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models, but many are based on the ADDIE model, with its five phases: analysis, design, development, implementation, and evaluation.

Educational Testing Service

Educational Testing Service (ETS), founded in 1947, is the world's largest private educational testing and assessment organization. It is headquartered in Lawrence Township, New Jersey, but has a Princeton address.

Educational assessment or educational evaluation is the systematic process of documenting and using empirical data on knowledge, skills, attitudes, aptitudes, and beliefs to refine programs and improve student learning. Assessment data can be obtained by directly examining student work to assess the achievement of learning outcomes or can be based on data from which one can make inferences about learning. Assessment is often used interchangeably with test but is not limited to tests. Assessment can focus on the individual learner, the learning community, a course, an academic program, the institution, or the educational system as a whole. The word "assessment" came into use in an educational context after the Second World War.

Computerized adaptive testing (CAT) is a form of computer-based test that adapts to the examinee's ability level. For this reason, it has also been called tailored testing. In other words, it is a computer-administered test in which the selection of the next item, or set of items, depends on the correctness of the test taker's responses to the items administered so far.
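As an illustration of that adaptive loop, here is a minimal sketch with a hypothetical item bank and a deliberately simplified ability update; it is not any operational program's algorithm. Each successive item is chosen because its difficulty best matches the current ability estimate under the Rasch model, where item information is greatest:

```python
import math

# Hypothetical Rasch item bank: item id -> difficulty parameter.
bank = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.7, "q5": 1.6}

def p_correct(theta, b):
    # Rasch model: probability that a person of ability theta
    # answers an item of difficulty b correctly.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, unused):
    # Rasch item information peaks where difficulty equals ability,
    # so pick the unused item whose difficulty is closest to theta.
    return min(unused, key=lambda i: abs(bank[i] - theta))

def update_theta(theta, b, correct, step=0.5):
    # Crude gradient step on the response log-likelihood; operational
    # CATs use maximum-likelihood or Bayesian ability estimation.
    return theta + step * ((1.0 if correct else 0.0) - p_correct(theta, b))

theta = 0.0                                        # initial ability estimate
unused = set(bank)
simulated = {"q3": True, "q4": True, "q5": False}  # a made-up examinee
for _ in range(3):
    item = next_item(theta, unused)
    unused.discard(item)
    correct = simulated.get(item, False)
    theta = update_theta(theta, bank[item], correct)
    print(item, correct, round(theta, 2))
```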

Gwyneth M. Boodoo is an American psychologist and expert on educational measurement.

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is the largest continuing and nationally representative assessment of what U.S. students know and can do in various subjects. NAEP is a congressionally mandated project administered by the National Center for Education Statistics (NCES), within the Institute of Education Sciences (IES) of the United States Department of Education. The first national administration of NAEP occurred in 1969. The National Assessment Governing Board (NAGB) is an independent, bipartisan board that sets policy for NAEP and is responsible for developing the framework and test specifications. The Governing Board, whose members are appointed by the U.S. Secretary of Education, includes governors, state legislators, local and state school officials, educators, business representatives, and members of the general public. Congress created the 26-member board in 1988.

Standards for Educational and Psychological Testing

The Standards for Educational and Psychological Testing is a set of testing standards developed jointly by the American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME).

Formative assessment, formative evaluation, formative feedback, or assessment for learning, including diagnostic testing, is a range of formal and informal assessment procedures conducted by teachers during the learning process in order to modify teaching and learning activities to improve student attainment. The goal of a formative assessment is to monitor student learning to provide ongoing feedback that can help students identify their strengths and weaknesses and target areas that need work. It also helps faculty recognize where students are struggling and address problems immediately. It typically involves qualitative feedback for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative assessment, which seeks to monitor educational outcomes, often for purposes of external accountability.

Cattell–Horn–Carroll theory

The Cattell–Horn–Carroll theory is a psychological theory on the structure of human cognitive abilities. Based on the work of three psychologists, Raymond B. Cattell, John L. Horn, and John B. Carroll, the Cattell–Horn–Carroll theory is regarded as an important theory in the study of human intelligence. Drawing on a large body of research spanning over 70 years, Carroll's three-stratum theory was developed using the psychometric approach, the objective measurement of individual differences in abilities, and the application of factor analysis, a statistical technique that uncovers relationships between variables and the underlying structure of concepts such as 'intelligence'. The psychometric approach has consistently facilitated the development of reliable and valid measurement tools and continues to dominate the field of intelligence research.

Cognitive skills, also called cognitive functions, cognitive abilities or cognitive capacities, are skills of the mind, as opposed to other types of skills such as motor skills. Some examples of cognitive skills are literacy, self-reflection, logical reasoning, abstract thinking, critical thinking, introspection and mental arithmetic. Cognitive skills vary in processing complexity, and can range from more fundamental processes such as perception and various memory functions, to more sophisticated processes such as decision making, problem solving and metacognition.

Norman “Fritz” Frederiksen (1909–1998) was an American research psychologist and leading proponent of performance assessment, an approach to educational and occupational testing that focused on the use of tasks similar to the ones individuals actually encounter in real classroom and work environments. In keeping with the philosophy underlying this approach, Frederiksen was a critic of multiple-choice testing, which he felt negatively influenced school curricula and classroom practice. Much of his research centered upon creating and evaluating alternative approaches to the measurement of knowledge and skill, which he pursued over a 40-year career at Educational Testing Service (ETS) in Princeton, NJ. For his work, he received the American Psychological Association's Award for Distinguished Contributions to Knowledge in 1984 and, by the time of his retirement from ETS, had attained the position of Distinguished Scientist, the organization's highest-ranking scientific title at that time.

Adaptive comparative judgement is a technique borrowed from psychophysics that can generate reliable results for educational assessment; as such, it is an alternative to traditional exam-script marking. In the approach, judges are presented with pairs of student work and asked to choose which of the two is better. By means of an iterative and adaptive algorithm, a scaled distribution of student work can then be obtained without reference to criteria.
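The scaling step is commonly done with a Bradley–Terry-style pairwise model. Below is a minimal sketch, using hypothetical judgment data and omitting the adaptive part (choosing which pairs to judge next as results accumulate), that recovers scale values for three pieces of student work from "which is better?" decisions:

```python
import math

# Hypothetical pairwise judgments: (winner, loser) pairs from judges asked
# "which of these two pieces of student work is better?"
judgments = [("A", "B"), ("B", "A"), ("A", "B"),
             ("A", "C"), ("B", "C"), ("C", "B")]
scripts = {"A", "B", "C"}

# Fit a Bradley-Terry scale by gradient ascent on the log-likelihood.
theta = {s: 0.0 for s in scripts}
for _ in range(500):
    grad = {s: 0.0 for s in scripts}
    for winner, loser in judgments:
        # Modeled probability that `winner` beats `loser`.
        p = 1.0 / (1.0 + math.exp(-(theta[winner] - theta[loser])))
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    for s in scripts:
        theta[s] += 0.1 * grad[s]

# The fitted values place each script on a common quality scale
# without reference to any marking criteria.
print({s: round(v, 2) for s, v in theta.items()})
```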

Automated essay scoring (AES) is the use of specialized computer programs to assign grades to essays written in an educational setting. It is a form of educational assessment and an application of natural language processing. Its objective is to classify a large set of textual entities into a small number of discrete categories, corresponding to the possible grades, for example, the numbers 1 to 6. Therefore, it can be considered a problem of statistical classification.

ACT, Inc. is an American 501(c)(3) nonprofit organization, primarily known for the ACT, a standardized test designed to assess high school students' academic achievement and college readiness. For the U.S. high school graduating class of 2019, 52 percent of graduates had taken the ACT test; the more than 1.78 million students included virtually all high school graduates in 17 states.

Michael J. Hannafin

Michael J. Hannafin was a professor of instructional technology and director of the Learning and Performance Support Laboratory at the University of Georgia. He obtained a Ph.D. in educational technology from Arizona State University. Along with Kyle Peck, he developed the field of computer-aided instruction as distinguished from computer-based instruction. He received the AERA SIG-IT Best Paper Award in 2007.

The National Council on Measurement in Education (NCME) is a U.S.-based professional organization for assessment, evaluation, testing, and other aspects of educational measurement. NCME was launched in 1938 and previously operated under the name National Council on Measurements Used in Education.

Lynn Fuchs is an educational psychologist known for research on instructional practice and assessment, reading disabilities, and mathematics disabilities. She is the Dunn Family Chair in Psychoeducational Assessment in the Department of Special Education at Vanderbilt University.

Alina Anca von Davier is a psychometrician and researcher in computational psychometrics, machine learning, and education. Von Davier is a researcher, innovator, and executive leader with over 20 years of experience in EdTech and the assessment industry. She is the Chief of Assessment at Duolingo, where she leads the Duolingo English Test research and development area. She is also the founder and CEO of EdAstra Tech, a service-oriented EdTech company. In 2022, she joined the University of Oxford as an Honorary Research Fellow and Carnegie Mellon University as a Senior Research Fellow.

Mark Daniel Reckase is an educational psychologist and expert on quantitative methods and measurement who is known for his work on computerized adaptive testing, multidimensional item response theory, and standard setting in educational and psychological tests. Reckase is University Distinguished Professor Emeritus in the College of Education at Michigan State University.

Jacqueline P. Leighton is a Canadian-Chilean educational psychologist, academic and author. She is a full professor in the Faculty of Education as well as vice-dean of Faculty Development and Faculty Affairs at the University of Alberta.

References

  1. Levine, J. "Honoring the Very Best: Recognition for a Stellar Group of TC Alumni". Teachers College, Columbia University. Retrieved August 18, 2020.
  2. "2017 AERA Fellows". American Educational Research Association. Retrieved August 18, 2020.
  3. "Bradley Hanson Award for Contributions to Educational Measurement Recipients Announced". National Council on Measurement in Education. Retrieved August 20, 2020.
  4. "E.F. Lindquist Award: 2020 Award Recipient". American Educational Research Association. Retrieved August 18, 2020.
  5. "Seventeen Scholars Elected to Membership in the National Academy of Education". National Academy of Education. 28 January 2022. Retrieved January 28, 2022.
  6. "Current Award: 2024 Outstanding Contribution to Research in Cognition and Assessment". American Educational Research Association. 8 April 2024. Retrieved April 8, 2024.
  7. Bennett, R.E. "Reinventing Assessment: Speculations on the Future of Large-Scale Educational Testing". Educational Testing Service.
  8. Rubenstein, G. (March 18, 2008). "Ending Hit-and-Run Testing: ETS Sets Out to Revolutionize Assessment". Edutopia.
  9. Ash, K. (March 14, 2011). "Tailoring Testing with Digital Tools". Education Week, 30(25). pp. 35, 37.
  10. Bennett, R.E.; Gitomer, D.H. (2009). "Transforming K-12 assessment: Integrating accountability testing, formative assessment, and professional support". In C. Wyatt-Smith & J. Cumming (Eds.), Educational assessment in the 21st century (pp. 43–61). New York: Springer.
  11. Bennett, R.E. (2010). "Cognitively based assessment of, for, and as learning: A preliminary theory of action for summative and formative assessment". Measurement: Interdisciplinary Research and Perspectives, 8, 70–91.
  12. NCME (July 26, 2018). "National Council on Measurement in Education (NCME) Position Statement on Theories of Action for Testing Programs" (PDF). NCME.
  13. Chalhoub-Deville, M. (2016). "Validity theory: Reform policies, accountability testing, and consequences". Language Testing, 33(4), 453–472. doi:10.1177/0265532215593312. S2CID 152167855.
  14. Bennett, R.E. (2011). "Formative Assessment: A Critical Review". Assessment in Education: Principles, Policy & Practice, 18, 5–25. doi:10.1080/0969594X.2010.513678. S2CID 14804319.
  15. Sawchuk, S. (May 21, 2009). "Has the Research on Formative Assessment Been Oversold?". Education Week Teacher Beat.
  16. Baird, J.; Hopfenbeck, T.N.; Newton, P.; Stobart, G.; Steen-Utheim, A.T. State of the Field Review: Assessment and Learning (PDF). Norwegian Knowledge Centre for Education.
  17. Heritage, M.; Wiley, E.C. (2020). Formative Assessment in the Disciplines: Framing a Continuum of Professional Learning. Cambridge, MA: Harvard Education Press. pp. 15–47.
  18. Nishizuka, K. (2020). "A Critical Review of Formative Assessment Research and Practice in Japan". International Journal of Curriculum Development and Practice. pp. 15–47.
  19. Sandene, B.; Horkay, N.; Bennett, R.E.; Allen, N.; Braswell, J.; Kaplan, B.; Oranje, A. (2005). Online Assessment in Mathematics and Writing: Reports From the NAEP Technology-Based Assessment Project, Research and Development Series. Washington, D.C.: IES. Retrieved August 18, 2020.
  20. Bennett, R.E.; Persky, H.; Weiss, A.R.; Jenkins, F. (2007). Problem Solving in Technology-Rich Environments: A Report From the NAEP Technology-Based Assessment Project. Washington, D.C.: IES. Retrieved August 18, 2020.
  21. Cavanagh, S. (August 17, 2007). "Computerized Tests Measure Problem-Solving". Education Week.
  22. Tucker, B. (November 2009). "The Next Generation of Testing". Education Leadership, 67(3). pp. 48–53.
  23. Bennett, R.E.; Zhang, M. (2016). "Validity and automated scoring". In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 142–173). New York: Routledge.
  24. Bennett, R.E. (2015). "The Changing Nature of Educational Assessment". Review of Research in Education, 39, 370–407. doi:10.3102/0091732X14554179. S2CID 145592665.
  25. Bennett, R.E. (2022). "The Good Side of COVID-19". Educational Measurement: Issues and Practice, 41, 61–63. doi:10.1111/emip.12496. S2CID 246588079.
  26. Bennett, R.E. (2023). "Toward a Theory of Socioculturally Responsive Assessment". Educational Assessment, 28(2), 83–104. doi:10.1080/10627197.2023.2202312.
  27. Bennett, R.E. (2023). "Let's Agree to (Mostly) Agree: A Response to Solano-Flores". Educational Assessment, 28(2), 122–127. doi:10.1080/10627197.2023.2215978. S2CID 258933453.

Randy E. Bennett publications indexed by Google Scholar.