Data-driven instruction

Data-driven instruction is an educational approach in which teachers use information about their students to inform teaching and learning. Unlike data-driven decision making, which operates at the school or district level, data-driven instruction takes place within the classroom. It works on two levels: it enables teachers to be more responsive to students’ needs, and it allows students to take charge of their own learning. Data-driven instruction can be understood through its history, its use in the classroom, its attributes, and examples from teachers who use the process.

History

Prior to the current emphasis on data and accountability in schools, some school leaders and education researchers focused on standards-based reform in education. From the creation of standards follows accountability: the idea that schools should report on their ability to meet the designated standards. [1] In the late 1990s and early 2000s, an increased emphasis on accountability in public organizations made its way into the realm of education. The passage of the No Child Left Behind (NCLB) Act in 2001 brought laws requiring schools to provide the public with information about the quality of education provided to students. To supply such data, states were mandated to create accountability measures and yearly assessments to gauge the effectiveness of schools in meeting those measures. [2] [3] Following NCLB, the Race to the Top program further pushed states to use data gathering and reporting to demonstrate schools’ ability to meet the demands of the public. Embedded in both NCLB and Race to the Top is the assumption that the collection and use of data can lead to increased student performance. [4]

Attributes

Data in the classroom is any information that is visible during instruction that could be used to inform teaching and learning. Types of data include quantitative and qualitative data, although quantitative data is most often used for data-driven instruction. Examples of quantitative data include test scores, results on a quiz, and levels of performance on a periodic assessment. [5] Examples of qualitative data include field notes, student work/artifacts, interviews, focus groups, digital pictures, video, and reflective journals. [6]

Quantitative and qualitative data are generally captured through two forms of assessment: formative and summative. Formative assessment is the use of information revealed and shared during instruction, actionable by the teacher or student, to improve student progress and performance. [7] Paul Black and Dylan Wiliam offer examples of classroom assessment that is formative in nature, including observing and discussing with students, understanding pupils’ needs and challenges, and looking at student work. [7] Conversely, summative assessments occur after teaching and learning have taken place; they are designed to determine whether a student can transfer their learning to new contexts, and they also serve accountability purposes. [7]

Examples

The ability to distinguish quantitative from qualitative data, and formative from summative assessment, can be defined as assessment literacy. [5] Building assessment literacy also includes knowing when to use which type of assessment, and how to use the resulting data to inform instruction. The purpose of data-driven instruction is to use information to guide teaching and learning, and Dylan Wiliam offers examples of data-driven instruction using formative assessment.

Because summative assessments provide neither timely feedback nor the ability to personalize the approach, they are not readily used for data-driven instruction in the classroom. Instead, a variety of information gleaned from different forms of assessment should be used to make decisions about student progress and performance within data-driven instruction. The use of multiple measures, of different forms and at different times, to make instructional decisions is referred to as triangulation. [5]
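The idea of triangulation can be illustrated with a minimal sketch. All student names, measures, and thresholds below are hypothetical; the point is only that no single measure decides, and a student is flagged for support only when several independent measures, gathered at different times, agree:

```python
# Sketch: triangulating multiple assessment measures (hypothetical data).
from statistics import mean

# Hypothetical measures: a unit quiz (quantitative), two exit tickets
# scored 0-4 (formative), and a teacher observation flag (qualitative).
students = {
    "Ada":   {"quiz": 88, "exit_tickets": [3, 4], "observed_struggle": False},
    "Ben":   {"quiz": 61, "exit_tickets": [1, 2], "observed_struggle": True},
    "Chloe": {"quiz": 58, "exit_tickets": [3, 3], "observed_struggle": False},
}

def needs_support(record, quiz_cutoff=70, ticket_cutoff=2.5):
    """Flag a student only when at least two of three measures agree."""
    signals = [
        record["quiz"] < quiz_cutoff,
        mean(record["exit_tickets"]) < ticket_cutoff,
        record["observed_struggle"],
    ]
    return sum(signals) >= 2

flagged = [name for name, rec in students.items() if needs_support(rec)]
print(flagged)  # only Ben; Chloe's low quiz is contradicted by the other measures
```

In this sketch, Chloe’s single low quiz score would trigger an intervention under a one-measure rule, but triangulation withholds the flag because her formative and observational data point the other way.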

Data-Driven Instructional Systems

Background and origins

Data-Driven Instructional Systems (DDIS) refers to a comprehensive system of structures that school leaders and teachers design in order to incorporate data into their instruction. [9] Building on the organizational and school change literature, Richard Halverson, Jeffrey Grigg, Reid Prichett, and Chris Thomas developed the DDIS framework to describe how relevant actors manage school-level internal accountability in relation to external accountability. [9] High-stakes external accountability policies such as the No Child Left Behind Act (NCLB) were implemented to hold schools accountable for reported standardized, summative assessment metrics. However, schools already had active internal accountability systems that place a high emphasis on an ongoing cycle of instructional improvement based on the use of data, including formative assessment results and behavioral information. When high-stakes accountability arrived, schools therefore went through a process of aligning different types of data collected for different purposes, and of managing the resulting tension. Employing case study approaches, Halverson and his colleagues explore how leaders coordinate and align the extant “central practices and cultures of schools” with the “new accountability pressure” in pursuit of improved student achievement scores. [9]

Key concepts

In their article, Richard Halverson, Jeffrey Grigg, Reid Prichett, and Chris Thomas suggest that the DDIS framework is composed of six organizational functions: data acquisition, data reflection, program alignment, program design, formative feedback, and test preparation. [9]

Data Acquisition

Data acquisition includes the data collection, data storage, and data reporting functions. “Data” in the DDIS model is broadly conceptualized as any type of information that guides teaching and learning. In practice, schools collect academic data such as standardized assessment scores, as well as non-academic data such as student demographic information, community survey data, curricula, technological capacity, and behavioral records. To store such data, some schools develop their own local collection strategies using low-tech printouts and notebooks, whereas other schools rely on high-tech district storage systems that can generate large volumes of reports. School leaders discuss which data need to be reported, and how to report them in a form that can guide teaching practices.
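The three functions above can be sketched as a toy data layer. This is not any real district system; the record kinds, names, and values are hypothetical, and the point is only the collect/store/report division of labor:

```python
# Sketch: a minimal data-acquisition layer in the spirit of the DDIS
# "collect / store / report" functions. All records are hypothetical.
import json
from collections import defaultdict

store = defaultdict(list)  # storage: records grouped by student

def collect(student, kind, value):
    """Collection: append any academic or non-academic record."""
    store[student].append({"kind": kind, "value": value})

def report(kind):
    """Reporting: pull one slice of the data for a team discussion."""
    return {s: [r["value"] for r in recs if r["kind"] == kind]
            for s, recs in store.items()}

collect("Ada", "test_score", 88)
collect("Ada", "behavior", "tardy")
collect("Ben", "test_score", 61)

print(json.dumps(report("test_score")))  # {"Ada": [88], "Ben": [61]}
```

A design note: keeping heterogeneous records (scores, behavior, survey data) in one store but reporting them one slice at a time mirrors the tension the DDIS authors describe between broad collection and purpose-specific reporting.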

Data Reflection

In the DDIS model, data reflection refers to collectively making sense of the reported data. [9] District-level data retreats provide key opportunities for schools within a district to identify school-level strengths and weaknesses in achievement data, and help districts develop district-level visions for instruction. In contrast, in local data reflection meetings, teachers hold conversations focused on individual students’ progress by examining each student’s performance on the assessed standards.

Program Alignment

Richard Halverson and his colleagues state that the program alignment function refers to “link[ing] the relevant content and performance standards with the actual content taught in classroom.” [9] For example, benchmark assessment results act as “problem-finding tools” that help educators identify curricular standards that are not well aligned with the current instructional programs.

Program Design

After identifying the main areas related to students’ learning needs and school goals, leaders and teachers design three kinds of interventions: faculty-based, curriculum-based, and student-based programs. To improve the faculty’s data literacy, educators are provided with a variety of professional development opportunities and coaching focused on professional interaction (faculty-based programs). In addition, educators modify their curriculum as a whole-classroom approach (curriculum-based programs) or develop customized instructional plans that take individual students’ needs into account (student-based programs).

Formative Feedback

Educators interact with each other around formative feedback on the local interventions implemented across classrooms and programs. Formative feedback systems are made up of three main components: intervention, assessment, and actuation. Intervention artifacts include curriculum materials such as textbooks and experiments, or programs such as individualized education programs (intervention). The effects of these artifacts are evaluated through formative assessments, either commercial or self-created, in terms of whether they brought the intended changes to teaching and learning (assessment). In the actuation space, educators interpret the assessment results against the initial goals of the intervention and discuss how to modify the instructional delivery or the assessments themselves, which lays the groundwork for new interventions (actuation).
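The intervention → assessment → actuation cycle can be written as a small loop. The scores, the goal threshold, and the revision step here are all hypothetical; the sketch only shows how each round's assessment feeds the next round's intervention:

```python
# Sketch: the intervention -> assessment -> actuation cycle as a loop.
# Scores and thresholds are hypothetical; "actuation" here simply
# revises the intervention before the next round.

def assess(class_scores):
    """Assessment: did the intervention bring the intended change?"""
    return sum(class_scores) / len(class_scores)

def actuate(intervention, avg, goal=75):
    """Actuation: interpret results against the goal and revise."""
    if avg >= goal:
        return intervention  # goal met: keep the intervention as-is
    return intervention + " + small-group reteach"

intervention = "new fractions unit"
for round_scores in ([62, 70, 68], [74, 80, 78]):
    avg = assess(round_scores)
    intervention = actuate(intervention, avg)
    print(round(avg, 1), "->", intervention)
```

In the first round the class average falls short of the goal, so actuation adds a revision; in the second round the revised intervention meets the goal and is kept unchanged.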

Test Preparation

This function is not intended for teachers to “teach to the test.” Rather, it points to the following activities: curriculum-embedded activities, test practice, environmental design, and community outreach. Teachers incorporate the content of standardized assessments into their day-to-day instruction (curriculum-embedded activities), help students practice and become accustomed to test-taking through similar types of tests (test practice), and establish a favorable test-taking environment (environmental design). Further, teachers communicate with parents and community members on topics ranging from test implementation to interpreting the test results (community outreach).

Implications

For school districts

The primary implication for school districts is ensuring that high-quality, relevant data are gathered and made available. Beyond creating systems to gather and share data, the district must provide expertise, in the form of data-expert personnel and/or access to professional development resources, so that school building leaders are able to access and use the data. [11]

Another critical district responsibility is to provide the leadership and vision to promote the use of information about student performance to improve teaching practice. Zavadsky and Dolejs suggest two areas for school districts to consider:

“The first is data collection and analysis. Districts and schools must carefully consider what data they need to collect, develop instruments with which to collect the data, and make the data available as soon as possible. The second component is data use. Principals and district leaders must give teachers sufficient time and training to understand the data and learn how to respond to what the data reveal”. [12]

While the literature shows the vital role of the district in setting the stage for data-driven instruction, more of the work of connecting student performance to classroom practices happens at the school and classroom level.

For schools

Schools have a major role in establishing the conditions for data-driven instruction to flourish. Heppen et al. indicate a need for a clear and consistent focus on using data, and for a data-rich environment that supports teachers’ efforts to use data to drive instruction. When leadership creates and maintains an environment that promotes collaboration and clearly communicates the urgency of improving student learning, teachers feel supported in engaging in data use. The additional scaffold of modeling data use at the school level increases teachers’ expertise in using data. [13]

For teachers

Data-driven instruction is created and implemented in the classroom, where teachers have the most direct link between student performance and classroom practices. Through the use of data, teachers can make decisions about what and how to teach, including how to use class time, interventions for students who are not meeting standards, customizing lessons based on real-time information, adapting teaching practice to student needs, and making changes to pace, scope, and sequence. [14]

To be able to engage in data-driven instruction, teachers must first develop the knowledge, skills, and dispositions required. Working in a school culture and climate in which data-driven instruction is valued and supported, teachers have the ability to increase student achievement and potentially reduce the achievement gap. Additionally, teachers must have access to learning opportunities or professional development which helps them understand the pedagogical framework and technical skills required to obtain, analyze, and use information about students to make instructional decisions. [15]

For students

A significant area of growth in data-driven instruction is having students shape their lessons using data about their own progress. Younger learners who are able to self-report on grades and other assessments can experience high levels of achievement and progress within instruction. [16] Embedding data analysis by students into classroom practices requires time, training, and action. [17] The strategies that students use to evaluate their own learning vary in effectiveness. In a meta-analysis, Dunlosky, Rawson, Marsh, Nathan, and Willingham ranked ten learning strategies based on the projected impact each would have on achievement: [18]

Highly effective strategies: practice testing and distributed practice.

Moderately effective strategies: elaborative interrogation, self-explanation, and interleaved practice.

Less effective strategies: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading.

The less effective strategies may be more commonly used in K-12 classrooms than the moderately and highly effective ones. The authors suggest that students should be taught how to use the more effective techniques and when they are most helpful in guiding their learning. When these strategies become internalized, students will have developed the techniques needed to learn how to learn, which is critical as they move into the secondary level and are expected to be more independent in their studies.

Criticisms

A major criticism of data-driven instruction is that it focuses too much on test scores and gives too little attention to the results of classroom assessments. Data-driven instruction should serve as a “road map through assessment” that helps “teachers plan instruction to meet students’ needs, leading to better achievement”. [19] Summative assessments should not be used to inform the day-to-day teaching and learning that data-driven instruction supports. Additional criticisms include the limitations of quantitative data in representing student learning; instructional decisions that ignore students’ social and emotional needs or the context of the data; and a hyperfocus on the core areas of literacy and mathematics at the expense of elective (“encore”) and traditionally high-interest areas such as the arts and humanities.

Citations

  1. Elmore, Richard F. (2000). Building a New Structure for School Leadership. Albert Shanker Institute.
  2. Moriarty, Tammy Wu (May 2013). Data-driven decision making: Teachers' use of data in the classroom (Thesis). ProQuest 1432373944.
  3. Larocque, M (2007). "Closing the Achievement Gap: The Experience of a Middle School". Clearing House. 80 (4): 157–162. doi:10.3200/tchs.80.4.157-162. S2CID   145741309.
  4. Kennedy, Brianna L.; Datnow, Amanda (December 2011). "Student Involvement and Data-Driven Decision Making: Developing a New Typology". Youth & Society. 43 (4): 1246–1271. doi:10.1177/0044118X10388219. S2CID   145417758.
  5. Boudett, K. P.; City, E. A.; Murnane, R. J. (2013). Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning. Cambridge, MA: Harvard Education Press.
  6. Dana, N. F.; Yendol-Hoppey, D. (2014). The Reflective Educator's Guide to Classroom Research: Learning to Teach and Teaching to Learn Through Practitioner Inquiry (3rd ed.). Thousand Oaks, CA: Corwin.
  7. Black, P; Wiliam, D. (1998). "Inside the Black Box: Raising Standards Through Classroom Assessment". Phi Delta Kappan. 80 (2): 139–148.
  8. Wiliam, Dylan (2011). Embedded Formative Assessment. Bloomington, IN: Solution Tree.
  9. Halverson, Richard; Grigg, Jeffrey; Prichett, Reid; Thomas, Chris (March 2007). "The New Instructional Leadership: Creating Data-Driven Instructional Systems in School" (PDF). Journal of School Leadership. 17 (2): 159–194. doi:10.1177/105268460701700202. S2CID 61185505. Archived from the original (PDF) on 2020-02-11.
  11. Swan, G.; Mazur, J. (2011). "Examining data driven decision making via formative assessment: A confluence of technology, data interpretation heuristics and curricular policy". Contemporary Issues in Technology and Teacher Education. 11 (2): 205.
  12. Zavadsky, H.; Dolejs, A. (2006). "DATA: Not Just Another Four-Letter Word". Principal Leadership, Middle Level Ed. 7 (2): 32–36.
  13. Heppen, Jessica; Faria, Ann-Marie; Thomsen, Kerri; Sawyer, Katherine; Townsend, Monika; Kutner, Melissa; Stachel, Suzanne; Lewis, Sharon; Casserly, Michael (December 2010). Using Data to Improve Instruction in the Great City Schools: Key Dimensions of Practice. Urban Data Study. Council of the Great City Schools.
  14. Hamilton, L.; et al. (2009). Using Student Achievement Data to Support Instructional Decision Making. IES Practice Guide. Washington, DC: Institute of Education Sciences. Retrieved from http://files.eric.ed.gov/fulltext/ED506645.pdf
  15. Furlong-Gordon, Jean Marie (November 2009). Driving classroom instruction with data: From the district to the teachers to the classroom (Thesis). ProQuest 250914319.
  16. Hattie, J. (2012). Visible Learning for Teachers: Maximizing Impact on Learning. New York: Routledge.
  17. Depka, Eileen (March 29, 2019). Letting Data Lead: How to Design, Analyze, and Respond to Classroom Assessment. Bloomington, IN: Solution Tree. p. 106.
  18. Dunlosky, J.; Rawson, K. A.; Marsh, E. J.; Nathan, M. J.; Willingham, D. T. (2013). "Improving students' learning with effective learning techniques promising directions from cognitive and educational psychology". Psychological Science in the Public Interest. 14 (1): 4–58. doi:10.1177/1529100612453266. PMID   26173288. S2CID   1621081.
  19. Neuman, Susan B. (November 2016). "Code Red: The Danger of Data-Driven Instruction". Educational Leadership. 74 (3): 24–29.

Related Research Articles

A teaching method is a set of principles and methods used by teachers to enable student learning. These strategies are determined partly by the subject matter to be taught, partly by the relative expertise of the learners, and partly by constraints of the learning environment. For a particular teaching method to be appropriate and efficient, it has to take into account the learner, the nature of the subject matter, and the type of learning it is supposed to bring about.

Student-centered learning, also known as learner-centered education, broadly encompasses methods of teaching that shift the focus of instruction from the teacher to the student. In original usage, student-centered learning aims to develop learner autonomy and independence by putting responsibility for the learning path in the hands of students by imparting to them skills, and the basis on how to learn a specific subject and schemata required to measure up to the specific performance requirement. Student-centered instruction focuses on skills and practices that enable lifelong learning and independent problem-solving. Student-centered learning theory and practice are based on the constructivist learning theory that emphasizes the learner's critical role in constructing meaning from new information and prior experience.

Educational assessment or educational evaluation is the systematic process of documenting and using empirical data on the knowledge, skill, attitudes, aptitude and beliefs to refine programs and improve student learning. Assessment data can be obtained from directly examining student work to assess the achievement of learning outcomes or can be based on data from which one can make inferences about learning. Assessment is often used interchangeably with test, but not limited to tests. Assessment can focus on the individual learner, the learning community, a course, an academic program, the institution, or the educational system as a whole. The word "assessment" came into use in an educational context after the Second World War.

Electronic assessment, also known as digital assessment, e-assessment, online assessment or computer-based assessment, is the use of information technology in assessment such as educational assessment, health assessment, psychiatric assessment, and psychological assessment. This covers a wide range of activities ranging from the use of a word processor for assignments to on-screen testing. Specific types of e-assessment include multiple choice, online/electronic submission, computerized adaptive testing such as the Frankfurt Adaptive Concentration Test, and computerized classification testing.

Mastery learning is an instructional strategy and educational philosophy, first formally proposed by Benjamin Bloom in 1968. Mastery learning maintains that students must achieve a level of mastery in prerequisite knowledge before moving forward to learn subsequent information. If a student does not achieve mastery on the test, they are given additional support in learning and reviewing the information and then tested again. This cycle continues until the learner accomplishes mastery, and they may then move on to the next stage. In a self-paced online learning environment, students study the material and take assessments. If they make mistakes, the system provides insightful explanations and directs them to revisit the relevant sections. They then answer different questions on the same material, and this cycle repeats until they reach the established mastery threshold. Only then can they move on to subsequent learning modules, assessments, or certifications.

In education, Response to Intervention is an approach used to provide early, systematic, and appropriately intensive supplemental instruction and academic support to children who are at risk for or already underperforming as compared to appropriate grade or age level standards. However, to better reflect the transition to a broader approach to intervention, there has been a shift in recent years from the terminology referring to RTI to MTSS, which stands for "Multi-Tiered System of Supports." MTSS represents the latest framework of support that is being implemented to systematically meet the wider needs which influence student learning and performance.

A course evaluation is a paper or electronic questionnaire, which requires a written or selected response answer to a series of questions in order to evaluate the instruction of a given course. The term may also refer to the completed survey form or a summary of responses to questionnaires.

Summative assessment, summative evaluation, or assessment of learning is the assessment of participants in an educational program, designed both to assess the effectiveness of the program and the learning of the participants. It contrasts with formative assessment, which monitors participants' development during the program in order to inform instructors of student learning progress.

Formative assessment, formative evaluation, formative feedback, or assessment for learning, including diagnostic testing, is a range of formal and informal assessment procedures conducted by teachers during the learning process in order to modify teaching and learning activities to improve student attainment. The goal of a formative assessment is to monitor student learning to provide ongoing feedback that can help students identify their strengths and weaknesses and target areas that need work. It also helps faculty recognize where students are struggling and address problems immediately. It typically involves qualitative feedback for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative assessment, which seeks to monitor educational outcomes, often for purposes of external accountability.

Evidence-based education (EBE) is the principle that education practices should be based on the best available scientific evidence, with randomised trials as the gold standard of evidence, rather than tradition, personal judgement, or other influences. Evidence-based education is related to evidence-based teaching, evidence-based learning, and school effectiveness research.

Differentiated instruction and assessment, also known as differentiated learning or, in education, simply differentiation, is a framework or philosophy for effective teaching that involves providing all students within their diverse classroom community of learners a range of different avenues for understanding new information in terms of acquiring content; processing, constructing, or making sense of ideas; and developing teaching materials and assessment measures so that all students within a classroom can learn effectively, regardless of differences in ability. Differentiated instruction means using different tools, content, and processes to successfully reach all individuals. According to Carol Ann Tomlinson, differentiated instruction is the process of "ensuring that what a student learns, how he or she learns it, and how the student demonstrates what he or she has learned is a match for that student's readiness level, interests, and preferred mode of learning." According to Boelens et al. (2018), differentiation can occur on two levels: the administration level, which takes the socioeconomic status and gender of students into consideration, and the classroom level, where differentiation revolves around content, process, product, and effects. On the content level, teachers adapt what they are teaching to meet students' needs, which can mean making content more challenging or simpler based on students' levels. The process of learning can be differentiated as well: teachers may teach students individually, or assign problems to small groups, partners, or the whole group depending on the needs of the students. By differentiating product, teachers decide how students will present what they have learned, which may take the form of videos, graphic organizers, photo presentations, writing, or oral presentations. All of this takes place in a safe classroom environment where students feel respected and valued (effects).

Teacher quality assessment commonly includes reviews of qualifications, tests of teacher knowledge, observations of practice, and measurements of student learning gains. Assessments of teacher quality are currently used for policymaking, employment and tenure decisions, teacher evaluations, merit pay awards, and as data to inform the professional growth of teachers.

The gradual release of responsibility (GRR) model is a structured method of pedagogy centred on devolving responsibility within the learning process from the teacher to the learner. This approach requires the teacher to initially take on all the responsibility for a task, transitioning in stages to the students assuming full independence in carrying it out. The goal is to cultivate confident learners and thinkers who are capable of handling tasks even in areas where they have not yet gained expertise.

Data-informed decision-making (DIDM) refers to the collection and analysis of data to guide decisions that improve success. Another form of this process is data-driven decision-making, "which is defined similarly as making decisions based on hard data as opposed to intuition, observation, or guesswork." DIDM is used in education communities and in other fields in which data are used to inform decisions. While "data-based decision-making" is a more common term, "data-informed decision-making" is the preferred term, since decisions should not be based solely on quantitative data. Data-driven decision-making is commonly used in the context of business growth and entrepreneurship. Many educators have access to data systems for analyzing student data; these systems present data to educators in an over-the-counter format to improve the success of educators' data-informed decision-making. In business, fostering and actively supporting data-driven decision-making in the firm and among colleagues may be one of the central responsibilities of CIOs or CDOs.

Robert J. Marzano is an educational researcher in the United States. He has done educational research and theory on the topics of standards-based assessment, cognition, high-yield teaching strategies, and school leadership, including the development of practical programs and tools for teachers and administrators in K–12 schools.

Teacher leadership is a term used in K-12 schools for classroom educators who simultaneously take on administrative roles outside of their classrooms to assist in functions of the larger school system. Teacher leadership tasks may include but are not limited to: managing teaching, learning, and resource allocation. Teachers who engage in leadership roles are generally experienced and respected in their field which can both empower them and increase collaboration among peers.

Instructional leadership is generally defined as the management of curriculum and instruction by a school principal. This term appeared as a result of research associated with the effective school movement of the 1980s, which revealed that the key to running successful schools lies in the principals' role. However, the concept of instructional leadership has recently been stretched to include more distributed models which emphasize distributed and shared empowerment among school staff, for example distributed leadership, shared leadership, and transformational leadership.

Educator effectiveness is a United States K-12 school system education policy initiative that measures the quality of an educator's performance in terms of improving student learning. It describes a variety of methods, such as observations, student assessments, student work samples and examples of teacher work, that education leaders use to determine the effectiveness of a K-12 educator.

Data-based decision making or data-driven decision making refers to educators’ ongoing process of collecting and analyzing different types of data, including demographic, student achievement, satisfaction, and process data, to guide decisions toward improving the educational process. DDDM has become more important in education with the rise of federal and state test-based accountability policies. The No Child Left Behind Act opened broader opportunities and incentives for the use of data by educational organizations, requiring schools and districts to analyze additional components of data while pressing them to increase student test scores. Such information holds schools accountable for year-by-year improvement across various student groups. DDDM helps to recognize a problem and who is affected by it.

Paul J. Black is a British educational researcher and physicist, and Professor Emeritus at King's College London. Black was previously Professor of Science Education and Director of the Centre for Science and Mathematics Education at the Chelsea College of Science and Technology, and Head of Educational Studies at King's College London. He is a former Chair of the Task Group on Assessment and Testing and Deputy Chair of the National Curriculum Council, and is recognised as an architect of the national curriculum testing regime and the national curriculum for Science.