Rubric (academic)


In the realm of US education, a rubric is a "scoring guide used to evaluate the quality of students' constructed responses" according to James Popham. [1] In simpler terms, it serves as a set of criteria for grading assignments. Typically presented in table format, rubrics contain evaluative criteria, quality definitions for various levels of achievement, and a scoring strategy. [1] They play a dual role for teachers in marking assignments and for students in planning their work. [2]


Components of a scoring rubric

A scoring rubric typically includes dimensions or "criteria" on which performance is rated, definitions and examples illustrating measured attributes, and a rating scale for each dimension. Joan Herman, Aschbacher, and Winters identify these elements in scoring rubrics: [3]

  - Traits or dimensions serving as the basis for judging the student response
  - Definitions and examples clarifying each trait or dimension
  - A scale of values for rating each dimension
  - Standards of excellence for specified performance levels with models or examples
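These elements map naturally onto a small data structure. The sketch below is purely illustrative: the class names, the essay criteria, and the level descriptions are invented for this example, not drawn from the literature cited above.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One trait or dimension on which a student response is judged."""
    name: str
    definition: str
    # Quality definitions keyed by scale value, e.g. 1 = lowest, 4 = highest.
    levels: dict[int, str] = field(default_factory=dict)

@dataclass
class Rubric:
    title: str
    criteria: list[Criterion] = field(default_factory=list)

# A hypothetical two-criterion rubric for a short essay.
essay_rubric = Rubric(
    title="Short essay",
    criteria=[
        Criterion("Thesis", "Clarity and focus of the central claim",
                  {1: "No discernible thesis", 2: "Thesis present but vague",
                   3: "Clear thesis", 4: "Clear, arguable, well-focused thesis"}),
        Criterion("Evidence", "Relevance and sufficiency of support",
                  {1: "No support", 2: "Sparse or off-topic support",
                   3: "Relevant support", 4: "Rich, well-integrated support"}),
    ],
)
```

Each `Criterion` carries its own scale of values and quality definitions, mirroring the element list above.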

Types

Rubrics can be classified as holistic, analytic, or developmental. Holistic rubrics provide an overall rating for a piece of work, considering all aspects. Analytic rubrics evaluate various dimensions or components separately. Developmental rubrics, a subset of analytical rubrics, facilitate assessment, instructional design, and transformative learning through multiple dimensions of developmental successions.

Steps to create a scoring rubric

To create an effective scoring rubric, a five-step method is often employed: [4]

  1. Model Review: Provide students with sample assignments of varying quality for analysis.
  2. Criteria Listing: Collaboratively list criteria for the scoring rubric, incorporating student feedback.
  3. Quality Gradations: Define hierarchical categories describing levels of quality or development.
  4. Practice on Models: Allow students to apply rubrics to sample assignments for a deeper understanding.
  5. Self and Peer Assessment: Introduce self and peer-assessment to reinforce learning.

When to use scoring rubrics

Scoring rubrics are used in individual assessments, projects, and capstone projects. They are particularly valuable when multiple evaluators score the same work, because a shared rubric keeps every evaluator focused on the same contributing attributes. They are also well suited to project assessment, where they supply explicit criteria for each component of the work.

Developmental rubrics

Developmental rubrics, a subtype of analytic rubrics, utilize multiple dimensions of developmental successions for assessment, instructional design, and transformative learning. They define modes of practice within a community of experts and indicate transformative learning through dynamic succession.

Defining developmental rubrics

Developmental rubrics refer to a matrix of modes of practice. Practices belong to a community of experts. [5] Each mode of practice competes with a few others within the same dimension. Modes appear in succession because their frequency is determined by four parameters: endemicity, performance rate, commitment strength, and acceptance. Transformative learning results in changing from one mode to the next.

The typical developmental modes can be roughly identified as beginning, exploring, sustaining, and inspiring. The timing of the four levels is unique to each dimension, and it is common to find beginning or exploring modes in one dimension coexisting with sustaining or inspiring modes in another. Often, the modes within a dimension are given unique names in addition to the typical identifier. As a result, developmental rubrics have four properties:

  1. They are descriptions of examples of behaviors.
  2. They contain multiple dimensions each consisting of a few modes of practice that cannot be used simultaneously with other modes in the dimension.
  3. The modes of practice within a dimension show a dynamic succession of levels.
  4. They can be created for extremely diverse scales of time and place.
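The properties above can be sketched as a minimal model in Python. This is an illustration only: the dimension names are invented, and the four mode labels follow the typical beginning/exploring/sustaining/inspiring succession described earlier. Each dimension holds exactly one mode at a time, and dimensions advance independently, so an early mode in one dimension can coexist with a later mode in another.

```python
# Typical succession of modes; a learner occupies one mode per dimension.
MODES = ["beginning", "exploring", "sustaining", "inspiring"]

class DevelopmentalProfile:
    """A learner's current mode of practice in each dimension."""

    def __init__(self, dimensions: list[str]):
        # Every dimension starts at the first mode of the succession.
        self.current = {dim: 0 for dim in dimensions}

    def mode(self, dim: str) -> str:
        return MODES[self.current[dim]]

    def advance(self, dim: str) -> str:
        """Transformative learning: move to the next mode in succession."""
        if self.current[dim] < len(MODES) - 1:
            self.current[dim] += 1
        return self.mode(dim)

# Hypothetical dimensions for one community's rubrics.
profile = DevelopmentalProfile(["collaboration", "inquiry"])
profile.advance("collaboration")  # -> "exploring"
profile.mode("inquiry")           # still "beginning"
```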

Creating developmental rubrics

  1. Since practices belong to a community, the first step is to locate a group of practitioners who are experts in their field and experienced with learners.
  2. Next, each practitioner works with an expert developmental interviewer to create a matrix that best reflects their experiences. Once several interviews have been completed they can be combined within a single set of developmental rubrics for the community through individual or computerized text analysis.
  3. Third, the community of experts rate learner performances and meet to compare ratings of the same performances and revise the definitions when multiple interpretations are discovered.
  4. Fourth, instructors of particular courses share the developmental rubrics with students and identify the target modes of practice for the course. Typically, a course targets only a fraction of the dimensions of the community's developmental rubrics and only one mode of practice within each of the target dimensions.
  5. Finally, the rubrics are used in real time to motivate student development, usually focusing on one dimension at a time and discussing opportunities to perform at the next mode of practice in the succession.

Etymology and history

The term "rubric" traditionally referred to instructions on a test or a heading on a document. In modern education, it has evolved to denote an assessment tool linked to learning objectives. The transition from medicine to education occurred through the construction of "Standardized Developmental Ratings" in the mid-1970s, later adapted for writing assessment.

Technical aspects

Scoring rubrics enhance scoring consistency, giving educators a more reliable grading tool: with a shared rubric, scores vary less from student to student and from teacher to teacher.
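One simple way to quantify this consistency is the exact-agreement rate between two raters scoring the same set of responses. The sketch below is illustrative (the scores are invented); practical studies often use more robust statistics such as Cohen's kappa, which corrects for chance agreement.

```python
def exact_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Fraction of responses that two raters scored identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same set of responses")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Rubric scores for five essays from two hypothetical teachers.
exact_agreement([3, 4, 2, 4, 3], [3, 4, 3, 4, 3])  # 0.8
```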

See also

  - Standardized test
  - Educational assessment
  - Response to Intervention
  - Standards-based assessment
  - Holistic grading
  - Formative assessment
  - Authentic assessment
  - Anchor paper
  - Transformative assessment
  - Assessment in computer-supported collaborative learning
  - Corrective feedback
  - Peer assessment
  - Evidence-based education
  - Learning analytics
  - Teacher quality assessment
  - Open educational practices
  - Learning development
  - Writing assessment
  - Framework for Authentic Intellectual Work

References

  1. Popham, James (October 1997). "What's Wrong - and What's Right - with Rubrics". Educational Leadership. 55 (2): 72–75.
  2. Dawson, Phillip (December 2015). "Assessment rubrics: towards clearer and more replicable design, research and practice". Assessment & Evaluation in Higher Education. 42 (3): 347–360. CiteSeerX 10.1.1.703.8431. doi:10.1080/02602938.2015.1111294. S2CID 146330707.
  3. Herman, Joan (January 1992). A Practical Guide to Alternative Assessment. Association for Supervision & Curriculum Development. ISBN 978-0871201973.
  4. Goodrich, H. (1996). "Understanding Rubrics". Educational Leadership. 54 (4): 14–18.
  5. Wenger, E.; McDermott, R.; Snyder, W. M. (2002). Cultivating Communities of Practice. Boston, MA: Harvard Business School Press.
