Curriculum-based measurement, or CBM, is also referred to as a general outcome measure (GOM) of a student's performance in either basic skills or content knowledge.
CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. [1] Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were (a) easy to construct, (b) brief to administer and score, (c) technically adequate (with reliability and various types of validity evidence for use in making educational decisions), and (d) available in alternate forms, allowing time-series data to be collected on student progress. [2] This focus on the three language arts areas eventually expanded to include mathematics, though the technical research in mathematics continues to lag behind that published in the language arts. A later development was the application of CBM to the middle and secondary grades: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (with the maze), while Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning. [3]
Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to, and relevant for, understanding students' progress toward and achievement of state standards.
Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which include both instruction and curriculum. This points to one of the most important conundrums surrounding CBM: to evaluate the effects of a curriculum, a measurement system needs to provide an independent "audit" and not be biased toward only that which is taught. Early work framed this difference as mastery monitoring versus experimental analysis. Mastery monitoring was embedded in the curriculum itself, which forced the metric to be the number (and rate) of units traversed in learning. Experimental analysis, by contrast, relied on metrics such as oral reading fluency (words read correctly per minute) and correct word or letter sequences per minute (in writing or spelling), both of which can serve as GOMs. In mathematics, the metric is often digits correct per minute. Note that the metric of CBM is typically rate-based, in order to focus on "automaticity" in learning basic skills. [4]
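Because CBM metrics are rate-based, scoring a probe reduces to dividing a count of correct responses by the elapsed time. A minimal sketch (the function name and probe values are illustrative, not taken from any published CBM system):

```python
def rate_per_minute(correct_count: int, seconds: float) -> float:
    """Convert a raw correct-response count from a timed probe
    into a per-minute rate (e.g., words read correctly per minute,
    correct letter sequences per minute, or digits correct per minute)."""
    if seconds <= 0:
        raise ValueError("probe duration must be positive")
    return correct_count * 60.0 / seconds

# A 1-minute oral reading probe with 52 words read correctly:
wcpm = rate_per_minute(52, 60)    # 52.0 words correct per minute

# A 2-minute math probe with 38 digits correct:
dcpm = rate_per_minute(38, 120)   # 19.0 digits correct per minute
```

The same calculation applies regardless of the skill area; only the unit being counted (words, letter sequences, digits) changes.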
The most recent advancements of CBM have occurred in three areas. First, CBM has been applied to students with low-incidence disabilities. This work is best represented by Zigmond in the Pennsylvania Alternate Assessment and Tindal in the Oregon and Alaska Alternate Assessments. The second advancement is the use of generalizability theory with CBM, best represented by the work of John Hintze, in which the focus is on parceling the error term into components such as time, grade, setting, and task. Finally, Yovanoff, Tindal, and colleagues at the University of Oregon have applied item response theory (IRT) to the development of statistically calibrated equivalent forms in their progress monitoring system. [5]
Curriculum-based measurement emerged from behavioral psychology, yet several behaviorists have become disenchanted with what they see as its failure to capture the dynamics of the learning process. [6] [7]
Readability is the ease with which a reader can understand a written text. In natural language, the readability of text depends on its content and its presentation. Researchers have used various factors to measure readability.
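Two factors that commonly appear in readability measures are average sentence length and word complexity. The classic Flesch Reading Ease formula, for example, scores text as 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A rough sketch follows; the syllable counter is a naive vowel-group heuristic, whereas real implementations use pronunciation dictionaries or more careful rules:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count groups of consecutive vowels (min 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("The cat sat. The dog ran.")
# Short sentences of one-syllable words score near the top of the scale.
```

Formulas like this capture only surface features of a text; they say nothing about coherence, vocabulary familiarity in context, or the reader's background knowledge.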
DIBELS (Dynamic Indicators of Basic Early Literacy Skills) is a series of short tests that assess K-8 literacy.
The IEA's Progress in International Reading Literacy Study (PIRLS) is an international study of reading (comprehension) achievement in fourth graders. It has been conducted every five years since 2001 by the International Association for the Evaluation of Educational Achievement (IEA). It is designed to measure children's reading literacy achievement, to provide a baseline for future studies of trends in achievement, and to gather information about children's home and school experiences in learning to read.
Project STAR was a three-year, federally funded research project consisting of an intervention with preschoolers enrolled in the Head Start program in Lane County, Oregon, United States. The project was conducted from 1999 to 2003 by the Early Childhood Research Unit of the University of Oregon College of Education. The principal investigators were Dr. Ruth Kaminski, one of the co-authors of the DIBELS early literacy assessment, and Beth Stormshak. The goal of the program was to increase the literacy skills of at-risk children by improving their learning environments through a greater number of planned and focused activities. The curriculum had two components: a classroom ecology component and family-focused intervention activities. The intervention focused on strengthening children's social skills. To help the children, project staff worked directly with students' parents to increase parenting involvement and family participation in school.
In education, Response to Intervention (RTI) is an approach to academic intervention used in the United States to provide early, systematic, and appropriately intensive assistance to children who are at risk for underperforming, or are already underperforming, compared to appropriate grade- or age-level standards. RTI seeks to promote academic success through universal screening, early intervention, frequent progress monitoring, and increasingly intensive research-based instruction or interventions for children who continue to have difficulty. RTI is a multitiered approach to aiding students, adjusted and modified as needed when students continue to struggle.
STAR Reading, STAR Early Literacy and STAR Math are standardized, computer-adaptive assessments created by Renaissance Learning, Inc., for use in K-12 education. Each is a "Tier 2" assessment of a skill that can be used any number of times due to item-bank technology. These assessments fall somewhere between progress monitoring tools and high-stakes tests.
The Australian Council for Educational Research (ACER), established in 1930, is an independent educational research organisation based in Camberwell, Victoria (Melbourne) and with offices in Adelaide, Brisbane, Dubai, Jakarta, Kuala Lumpur, London, New Delhi, Perth and Sydney. ACER develops and manages a range of testing and assessment services and conducts research and analysis in the education sector.
The Assessment of Basic Language and Learning Skills (ABLLS) is an educational tool used frequently with applied behavior analysis (ABA) to measure the basic linguistic and functional skills of an individual with developmental delays or disabilities.
Positive behavior support (PBS) is a form of applied behavior analysis that uses a behavior management system to understand what maintains an individual's challenging behavior and how to change it. Inappropriate behaviors are difficult to change because they are functional; they serve a purpose for the person. These behaviors may be supported by reinforcement in the environment. People may inadvertently reinforce undesired behaviors by providing objects and/or attention in response to the behavior.
Direct instruction (DI) is a term for the explicit teaching of a skill set using lectures or demonstrations of the material to students. A particular subset, denoted by capitalization as Direct Instruction, refers to a specific approach developed by Siegfried Engelmann and Wesley C. Becker. DI teaches by explicit instruction, in contrast to exploratory models such as inquiry-based learning. DI formats include tutorials, participatory laboratory classes, discussion, recitation, seminars, workshops, observation, active learning, practica, and internships. The model follows a gradual-release sequence: "I do" (the instructor demonstrates), "we do" (guided practice), and "you do" (independent practice).
Reciprocal teaching is an instructional activity that takes the form of a dialogue between teachers and students regarding segments of text for the purpose of constructing the meaning of text. Reciprocal teaching is a reading technique which is thought to promote students' reading comprehension. A reciprocal approach provides students with four specific reading strategies that are actively and consciously used to support comprehension: Questioning, Clarifying, Summarizing, and Predicting. Palincsar (1986) believes the purpose of reciprocal teaching is to facilitate a group effort between teacher and students as well as among students in the task of bringing meaning to the text.
Reciprocal teaching "is best represented as a dialogue between teachers and students in which participants take turns assuming the role of teacher" (Annemarie Sullivan Palincsar).
Mary Nacol Meeker (1921–2003) was an American educational psychologist and entrepreneur. She is best known for applying J. P. Guilford's Structure of Intellect theory of human intelligence to the field of education.
The professional practice of behavior analysis is one domain of behavior analysis; the others are radical behaviorism, the experimental analysis of behavior, and applied behavior analysis. The practice of behavior analysis is the delivery of interventions to consumers, guided by the principles of radical behaviorism and the research of both experimental and applied behavior analysis. Professional practice seeks to change specific behavior through the implementation of these principles. In many states, practicing behavior analysts hold a license, certificate, or registration. In other states there are no laws governing their practice, and the practice may be restricted as falling within the scope-of-practice definitions of other mental health professions. This is changing rapidly as behavior analysts become more and more common.
Functional analysis in behavioral psychology is the application of the laws of operant and respondent conditioning to establish the relationships between stimuli and responses. To establish the function of operant behavior, one typically examines the "four-term contingency": first identifying the motivating operations, then the antecedent or trigger of the behavior, then the behavior itself as it has been operationalized, and finally the consequence of the behavior that continues to maintain it.
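The four-term contingency described above can be represented as a simple record; the field names and the example values below are illustrative, not part of any standard assessment instrument:

```python
from dataclasses import dataclass

@dataclass
class FourTermContingency:
    motivating_operation: str  # what makes the consequence valuable right now
    antecedent: str            # the trigger immediately preceding the behavior
    behavior: str              # the behavior itself, operationally defined
    consequence: str           # what follows and maintains the behavior

# Hypothetical classroom example:
example = FourTermContingency(
    motivating_operation="long period without adult attention",
    antecedent="teacher turns to help another student",
    behavior="student calls out loudly",
    consequence="teacher responds to the student immediately",
)
```

Laying the four terms out this way makes explicit that an intervention can target any link in the chain, not just the behavior itself.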
The Lexile Framework for Reading is an educational tool that uses a measure called a Lexile to match readers with books, articles, and other leveled reading resources. Readers and books are assigned a score on the Lexile scale, in which lower scores reflect easier readability for books and lower reading ability for readers. The Lexile framework uses quantitative methods, based on individual words and sentence lengths, rather than qualitative analysis of content to produce scores. Accordingly, the scores for texts do not reflect factors such as multiple levels of meaning or maturity of themes. Hence, the United States Common Core State Standards recommend the use of alternative, qualitative methods for selecting books for students at grade 6 and over. In the US, Lexile measures are reported annually from reading programs and assessments; as a result, about half of U.S. students in grades 3 through 12 receive a Lexile measure each year. In addition to being used in schools in all 50 states, Lexile measures are also used outside of the United States.
The Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP) is an assessment and skills-tracking system to assess the language, learning and social skills of children with autism or other developmental disabilities. A strong focus of the VB-MAPP is language and social interaction, which are the predominant areas of weakness in children with autism.
Seductive details are often used in textbooks, lectures, slideshows, and other forms of educational content to make a course more interesting or interactive. Seductive details can take the form of text, animations, photos, illustrations, sounds, or music, and are by definition (1) interesting and (2) not directed toward the learning objectives of a lesson. John Dewey, in 1913, first referred to this as "fictitious inducements to attention." While illustrated text can enhance comprehension, illustrations that are not relevant can lead to poor learning outcomes. Since the late 1980s, many studies in the field of educational psychology have shown that the addition of seductive details results in poorer retention of information and transfer of learning. Thalheimer conducted a meta-analysis that found, overall, a negative impact for the inclusion of seductive details such as text, photos or illustrations, and sounds or music in learning content. More recently, a 2020 paper found a similar effect for decorative animations. This reduction in learning is called the seductive details effect. There have been criticisms of this theory. Critics cite unconvincing and contradictory evidence to argue that seductive details do not always impede understanding and that they can sometimes be motivating for learners.
Data-driven instruction is an educational approach that relies on information to inform teaching and learning. The term refers to a method teachers use to improve instruction by examining the information they have about their students. It takes place within the classroom, in contrast to broader data-driven decision making. Data-driven instruction works on two levels: it gives teachers the ability to be more responsive to students' needs, and it allows students to take charge of their own learning. Data-driven instruction can be understood through its history, its use in the classroom, its attributes, and examples from teachers using the process.
Lynn Fuchs is an educational psychologist known for research on instructional practice and assessment, reading disabilities, and mathematics disabilities. She is the Dunn Family Chair in Psychoeducational Assessment in the Department of Special Education at Vanderbilt University.