Writing center assessment refers to a set of practices used to evaluate writing center spaces. It builds on the larger theories of writing assessment by focusing on how those methods and applications can be applied to writing center contexts. In many cases, writing center assessment, like any assessment of academic support structures in university settings, builds on programmatic assessment principles as well.[1] As a result, writing center assessment can be considered a branch of programmatic assessment, and its methods and approaches can be applied to a range of academic support structures, such as digital studio spaces.
While writing centers have been prominent features of American higher education since the 1970s, questions remain about their role in improving student writing ability.[2] In discussing the lack of attention to writing center assessment, Casey Jones compares writing centers to Alcoholics Anonymous, claiming that "both AA and writing labs have similar features," yet noting that although "the structure of AA complicates empirical research, the desired outcome, sobriety, can be clearly defined and measured. The clear-cut assessment of writing performance is a far more elusive task".[2] Between 1985 and 1989, the Writing Lab Newsletter, a popular publication among writing center directors, published little hard evaluation of writing centers, focusing instead on advice and how-to guides; this illustrates how little early discussion assessment received in writing center contexts.[3] In many cases, writing center directors or writing program administrators (WPAs) are responsible for assessing writing centers and must communicate the results to academic administration and various stakeholders.[4] Assessment is seen as beneficial for writing centers because it prompts them to take on the professional and ethical behaviors important not just for writing centers but for all of higher education.[5]
One of the major sources of methods and approaches for writing center assessment is writing assessment at large, along with programmatic assessment. James Bell argues that directors of writing centers should "turn to educational program evaluation and select general types of evaluations most appropriate for writing centers".[6] Writing center assessment methods can largely be divided into two forms: qualitative and quantitative. Qualitative methods are predicated on the desire to understand teaching and learning from the actions and perspectives of teachers and learners, and they have largely dominated knowledge making in composition studies, particularly in the last twenty years.[7] Quantitative methods, meanwhile, stem from the belief that the world works in predictable patterns, ones that might be isolated in terms of their causes and effects or the strengths of their relationships (i.e., correlation).[7] The use of quantitative methods in writing center contexts leaves room for issues to arise, however, such as data being interpreted incorrectly to support the work of the writing center,[8] or inappropriate measures of student success being chosen, such as ACT writing test scores or course grades in first-year composition courses.[9][10] Some writing scholars endorse quantitative methods more thoroughly than others and see them as most helpful when reframed within a postmodern epistemology, since most writing center directors subscribe to an epistemology that sees knowledge as constructed, tenuous, and relative.[11] Writing center scholars such as Stephen North group these methodologies into three larger approaches: reflections on experience, or looking back on writing center events to help others in similar situations; speculation, or theorizing how writing centers should work; and surveys, or what he champions as enumeration.[12] Fitting into and blending these methods, several writing studies scholars have published articles on methods used in assessing different elements of writing centers, which can be seen in the sections below.
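The relationship that quantitative methods often try to capture is a correlation. As a minimal illustrative sketch (the data, variable names, and scoring scale below are hypothetical, not drawn from any cited study), a director might compute the correlation between tutoring visits and a holistic writing score:

```python
# Illustrative only: hypothetical data pairing each student's number of
# writing center visits with a holistic writing score on a 1-6 scale.
visits = [0, 1, 1, 2, 3, 3, 4, 5, 6, 8]
scores = [2.0, 2.5, 3.0, 2.5, 3.5, 4.0, 3.5, 4.5, 4.0, 5.0]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

print(f"r = {pearson_r(visits, scores):.2f}")  # strength of the relationship
```

Even a strong positive r in such a sketch would show only an association, not that tutoring caused the improvement, which is one reason scholars cited above warn against data being interpreted incorrectly to support the work of the center.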
One method of assessment used in writing center contexts is focus groups. This method allows writing center directors to collect responses to specific questions and to use the social dynamics of the group, letting participants play off one another's answers and yielding changes that can be implemented rapidly to make the organization or product more productive.[13] For writing center assessment, focus groups should include about 7–12 people.[13]
Another common method of assessing writing centers is the survey, one of the most common quantitative methods used to gather data in writing centers.[12] Surveys fit the notion of enumeration mentioned by North above. They are commonly used to determine information such as student satisfaction with tutoring sessions, gathered through a post-session survey, or students' confidence as writers following their sessions in the writing center.[11] Because of the nature of tutoring sessions, collecting this type of data in the middle of a session may prove difficult; as North claimed while writing in 1984, "there is not a single published study of what happens in writing center tutorials".[12] Typically, surveys determine the number of students seen, the number of hours tutored, the reaction of students to the center, the reaction of teachers to the center, and so on.[12]
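As a minimal sketch of the enumeration such surveys support (the record fields and the 1–5 satisfaction scale are assumptions made for illustration, not a standard instrument), post-session responses might be tallied like this:

```python
# Illustrative only: each record is one post-session survey response.
# The field names and the 1-5 satisfaction scale are hypothetical.
responses = [
    {"student_id": "s01", "hours": 1.0, "satisfaction": 5},
    {"student_id": "s02", "hours": 0.5, "satisfaction": 4},
    {"student_id": "s01", "hours": 1.0, "satisfaction": 4},
    {"student_id": "s03", "hours": 1.5, "satisfaction": 3},
]

students_seen = len({r["student_id"] for r in responses})   # unique students
hours_tutored = sum(r["hours"] for r in responses)           # total contact hours
mean_satisfaction = sum(r["satisfaction"] for r in responses) / len(responses)

print(f"Students seen: {students_seen}")
print(f"Hours tutored: {hours_tutored:.1f}")
print(f"Mean satisfaction: {mean_satisfaction:.2f} / 5")
```

Tallies like these capture the usage and satisfaction counts named above, though, as North's critique suggests, they say nothing about what actually happens inside a tutorial.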
Some writing center scholars see recording tutoring sessions as a viable method of data gathering, one that answers critiques from the likes of Stephen North about the lack of research into what happens during sessions.[12] To accomplish this, writing center directors explicitly study what happens during a tutoring session by recording it on audio or video and analyzing the transcripts.[5]
Assessment plans are encouraged by some writing center scholars as a means of planning and enacting the improvement of centers. Several writing center scholars advise directors to develop assessment plans and provide a series of approaches for doing so. These typically begin with figuring out what to measure, then validating the plan, and finally presenting the findings to the relevant stakeholders.
One prominent example of an assessment plan can be seen in the Virginia Commonwealth Assessment Plan.[5] In discussing the VCAP, Isabelle Thompson lists six general heuristics describing what program assessment and improvement should be in this context.[5]
Thompson also outlines a series of steps writing center directors should take to develop an assessment plan.[5]
Others, like Neal Lerner, endorse frameworks for writing center assessment plans built on heuristics such as determining who participates in the writing center, what students need from it, and how satisfied students are with it; identifying campus environments and outcomes; finding comparable institutional assessments; analyzing nationally accepted standards; and measuring cost-effectiveness.[15]
Assessment of writing relies on the concept of validity, or ensuring that you measure what you intend to measure.[16] Chris Gallagher supports developing writing assessments locally, something many scholars in writing assessment firmly support,[17][18][19] but adds that assessment methods and choices should be validated on a larger scale,[20] and he suggests heuristics for doing so in his Assessment Quality Review Heuristic.
After designing and implementing an assessment plan in writing center contexts, assessment experts advise considering how the resulting information is presented to administrators in the university setting.[21][22] Writing center practitioners recommend that directors balance the usefulness of assessment findings for improving the space itself with rhetorical appeal to the intended audience.[7] Some administrators advise using quantifiable data and connecting that data to concepts important to a given university, like retention, persistence, and time-to-degree, though the factors worth assessing and presenting may vary depending on what a given university administration values.[22]
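As a hedged illustration of connecting usage data to a campus metric such as retention (the records and the comparison below are invented for the example, not drawn from any cited study), a director might compare retention rates for students who did and did not use the center:

```python
# Illustrative only: hypothetical records of first-year students, whether
# they used the writing center, and whether they returned the next fall.
students = [
    {"used_center": True,  "retained": True},
    {"used_center": True,  "retained": True},
    {"used_center": True,  "retained": False},
    {"used_center": False, "retained": True},
    {"used_center": False, "retained": False},
    {"used_center": False, "retained": False},
]

def retention_rate(group):
    """Share of a group retained into the next term."""
    return sum(s["retained"] for s in group) / len(group)

users = [s for s in students if s["used_center"]]
non_users = [s for s in students if not s["used_center"]]

print(f"Users:     {retention_rate(users):.0%} retained")
print(f"Non-users: {retention_rate(non_users):.0%} retained")
```

Because students self-select into visiting, any gap in such a comparison reflects selection effects as much as the center's impact, so the caution about misread data noted in the methods discussion applies here as well.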
In their book Building Writing Center Assessments that Matter, Ellen Schendel and William J. Macauley Jr. provide a set of heuristics for presenting information to stakeholders in the university setting.
Some of this advice, such as the desire to tell a story about the writing center space, clashes directly with advice from administrators like Josephine Koster, who claims that "administrators don't want to read essays" and that directors should use bulleted lists, headings, graphs and charts, and executive summaries in documents sent to administrators.[22] These clashes appear to support the larger importance placed on local writing assessment practices[17][18][19] in determining what local administrators may expect.