Writing center assessment

Writing center assessment refers to a set of practices used to evaluate writing center spaces. It builds on the larger theories, methods, and applications of writing assessment by focusing on how those processes can be applied to writing center contexts. In many cases, writing center assessment, like any assessment of academic support structures in university settings, also builds on programmatic assessment principles. [1] As a result, writing center assessment can be considered a branch of programmatic assessment, and its methods and approaches can be applied to a range of academic support structures, such as digital studio spaces.

History

While writing centers have been prominent features of American higher education since the 1970s, questions remain about their role in improving student writing ability. [2] Noting the scarcity of such research, Casey Jones compares writing centers to Alcoholics Anonymous, claiming that "both AA and writing labs have similar features," yet observing that while "the structure of AA complicates empirical research, the desired outcome, sobriety, can be clearly defined and measured. The clear-cut assessment of writing performance is a far more elusive task". [2] Between 1985 and 1989, the Writing Lab Newsletter, a popular publication among writing center directors, contained little discussion of rigorous evaluation of writing centers, focusing instead on advice and how-to guides, which illustrates how little early attention assessment received in writing center scholarship. [3] In many cases, writing center directors or writing program administrators (WPAs) are responsible for assessing writing centers and must communicate the results to academic administration and other stakeholders. [4] Assessment is seen as beneficial for writing centers because it encourages the professional and ethical behaviors important not just for writing centers but for all of higher education. [5]

Methods

One of the major sources of methods and approaches to writing center assessment is writing assessment at large, along with programmatic assessment. James Bell argues that directors of writing centers should "turn to educational program evaluation and select general types of evaluations most appropriate for writing centers". [6] Writing center assessment methods can largely be divided into two major forms: qualitative and quantitative. Qualitative methods are predicated on the desire to understand teaching and learning from the actions and perspectives of teachers and learners, and have largely dominated knowledge making in composition studies, particularly in the last twenty years. [7] Quantitative methods, meanwhile, stem from the belief that the world works in predictable patterns, ones that might be isolated in terms of their causes and effects or the strengths of their relationships (i.e., correlation). [7] The use of quantitative methods in writing center contexts can raise problems, however, such as data being interpreted incorrectly to support the work of the writing center [8] or the selection of inappropriate measures of student success, such as ACT writing test scores or course grades in first-year composition courses. [9] [10] Some writing scholars endorse quantitative methods more thoroughly than others, and see them as most helpful when reframed in a postmodern epistemology, since most writing center directors subscribe to a theory of epistemology that sees knowledge as constructed, tenuous, and relative. [11] Writing center scholars such as Stephen North group these methodologies into three larger approaches: Reflections on Experience, or looking back on writing center events to help others in similar situations; Speculation, or a theory of how writing centers should work; and Surveys, or what he characterizes as enumeration. [12] Fitting into and blending these methods, several writing studies scholars have published articles on methods used in assessing different elements of writing centers, which can be seen in the sections below.

Focus Groups

One method of assessment used in writing center contexts is focus groups. This method allows writing center directors to collect responses to specific questions and to use the social dynamics of the group, letting participants build on one another's answers and yielding feedback that can be implemented quickly to improve the center's services. [13] For writing center assessment, focus groups should include about 7–12 people. [13]

Surveys

Another common method of assessing writing centers is the survey, one of the most common quantitative methods used to gather data in writing centers. [12] Surveys fit into the notion of enumeration mentioned by North above, and are commonly used to determine information such as student satisfaction with tutoring sessions (in the form of a post-session survey) or students' confidence as writers following their sessions in the writing center. [11] Because of the nature of tutoring sessions, collecting this type of data during the sessions themselves can be difficult; writing in 1984, North claimed that "there is not a single published study of what happens in writing center tutorials". [12] Typically, surveys measure the number of students seen, the number of hours tutored, and the reactions of students and teachers to the center. [12]

Recording Sessions

Recording sessions are seen by some writing center scholars as a viable method of data gathering that answers critiques, such as Stephen North's, about the lack of research into what happens during tutoring sessions. [12] Writing center directors using this method explicitly study what happens during a tutoring session by making audio or video recordings and analyzing the transcripts. [5]

Assessment Plans

Assessment plans are encouraged by some writing center scholars as a means of planning and enacting the improvement of centers. Several writing center scholars advise directors to develop assessment plans and provide a series of approaches for doing so. These typically begin with deciding what to measure, then validating the plan, and finally presenting the findings to the relevant stakeholders.

Developing Assessment Plans

One prominent example of an assessment plan is the Virginia Commonwealth Assessment Plan (VCAP). [5] In discussing the VCAP, Isabelle Thompson lists six general heuristics of program assessment that fit into this context. [5]

According to Thompson, in order to develop an assessment plan, writing center directors should: [5]

  1. Prepare a mission statement for the writing center based on the services the center provides and aspires to provide. [7] [14]
  2. Develop goals, objectives, or intended educational outcomes for the center. [1] [7] [15]
  3. Determine appropriate assessment methods for the writing center. [10]
  4. Conduct the assessment of the writing center's services.
  5. Analyze the results of the assessment and draw conclusions about the results in terms of outcomes and the current strengths and weaknesses of the writing center.
  6. Use the results to bring about improvements in the center's services. [7]

Others, like Neal Lerner, endorse frameworks for writing center assessment plans built on heuristics such as determining who participates in the writing center, what students need from it, and how satisfied students are with it; identifying campus environments and outcomes; finding comparable institutional assessments; analyzing nationally accepted standards; and measuring cost-effectiveness. [15]

Validating Assessment Plans

Assessment of writing relies on the concept of validity, or ensuring that an assessment measures what it is intended to measure. [16] Chris Gallagher supports developing writing assessments locally, a position many scholars in writing assessment firmly endorse, [17] [18] [19] but adds that assessment methods and choices should be validated on a larger scale. [20] He suggests the following questions, his Assessment Quality Review Heuristic, for doing so:

  1. Briefly describe the writing program, including curricular and instructional goals, institutional constraints and opportunities (e.g. resources issues, labor conditions, professional development offerings), and student and teacher demographics. Append relevant documentation.
  2. Briefly describe the assessment and its relationship, if any, to other assessments conducted in the program. If this assessment is part of an overall assessment plan, append the plan.
  3. Answer the following questions about the assessment under review:
    • Meaningful
      • What are the purposes of this assessment? What are its intended uses? How were these purposes arrived at? Who formulated them? Why and to whom are those purposes significant? How were these purposes made known to students and teachers? How does the content of the assessment match its purpose?
    • Appropriate
      • How is the assessment suitable for this context, these participants, and its intended purposes and uses? How does the assessment reflect the values, beliefs, and aspirations of the participants and their immediate communities?
    • Useful
      • How does the assessment help students learn and help teachers teach? How does the assessment provide information that may be used to improve teaching and learning, curriculum, professional development, program policies, accountability, etc.? Who will use the information generated from this assessment and for what purposes?
    • Fair
      • How does the assessment ensure that all students are able to do and demonstrate their best work? How does the assessment contribute to the creation or maintenance of appropriate working conditions for teachers and students? How does it ensure adequate compensation and/or recognition for the labor required to produce it?
    • Trustworthy
      • How are the assessment results arrived at and by whom? How does the assessment ensure that these results represent the best professional judgment of educators? How does the assessment ensure that the results derive from a process that honors articulated differences even as it seeks common ground for decisions?
    • Just
      • What are the intended and unintended consequences of this assessment—for students, teachers, administrators, the program, the institution, etc.? How does the assessment ensure that these consequences are in the best interest of participants, especially students and teachers?
  4. In light of this review, what changes, if any, do you plan to make to this assessment? [20]

Presenting Findings to Stakeholders

After designing and implementing an assessment plan, writing center directors are advised by assessment experts to consider how the resulting information is presented to administrators in the university setting. [21] [22] Writing center practitioners recommend that directors balance the usefulness of assessment findings for improving the space itself with rhetorical appeals to the intended audience. [7] Some administrators advise using quantifiable data and connecting that data to concepts important to a given university, such as retention, persistence, and time-to-degree, though the factors worth assessing and presenting may vary depending on what a given university administration values. [22]

In their book Building Writing Center Assessments that Matter, Ellen Schendel and William J. Macauley Jr. provide a set of heuristics for presenting information to stakeholders in the university setting, among them the advice to tell a story about the writing center space.

Some of this advice, such as the desire to tell a story about the writing center space, clashes directly with advice from administrators like Josephine Koster, who claims that "administrators don't want to read essays. Directors should use bulleted lists, headings, graphs, and charts, and executive summaries in documents sent to administrators". [22] These clashes appear to support the larger importance placed on local writing assessment practices [17] [18] [19] in determining what local administrators may expect.


References

  1. Bell, James H. (2001). "When Hard Questions Are Asked: Evaluating Writing Centers". The Writing Center Journal. 21 (1): 7–28.
  2. Jones, Casey (2001). "The relationship between writing centers and improvement in writing ability: An assessment of the literature". Education: 3–20.
  3. Bell, James (March 1989). "What are We Talking About?: A Content Analysis of the Writing Lab Newsletter" (PDF). Writing Lab Newsletter. Retrieved 30 October 2015.
  4. Gallagher, Chris (Fall 2009). "What Do WPAs Need to Know about Writing Assessment? An Immodest Proposal". WPA: Writing Program Administration. Retrieved 30 October 2015.
  5. Thompson, Isabelle (2006). "Writing center assessment: Why and a little how". Writing Center Journal. 26 (1): 33–54.
  6. Bell, James H. (1998). "When Hard Questions Are Asked: Evaluating Writing Centers".
  7. Schendel, Ellen; Macauley, William J. (2012). Building Writing Center Assessments That Matter. Logan: Utah State University Press. ISBN 9780874218343.
  8. Enders, Doug (2005). "Assessing the writing center: A qualitative tale of a quantitative study" (PDF). Writing Lab Newsletter. Retrieved 30 October 2015.
  9. Lerner, Neal (September 1997). "Counting beans and making beans count" (PDF). Writing Lab Newsletter. Retrieved 30 October 2015.
  10. Lerner, Neal (September 2001). "Choosing beans wisely" (PDF). The Writing Lab Newsletter. Retrieved 30 October 2015.
  11. Carino, Peter; Enders, Doug (2001). "Does Frequency of Visits to the Writing Center Increase Student Satisfaction? A Statistical Correlation Study--or Story". Writing Center Journal. 22 (1): 83–103. ISSN 0889-6143.
  12. North, Stephen (1984). "Writing Center Research: Testing our Assumptions". Writing Centers: Theory and Administration. Urbana, Ill.: National Council of Teachers of English. pp. 24–35. ISBN 9780814158784.
  13. Cushman, Tara; Marx, Lindsey; Brower, Carleigh; Holahan, Katie; Boquet, Elizabeth (March 2005). "Using focus groups to assess writing center effectiveness" (PDF). Writing Lab Newsletter. Retrieved 30 October 2015.
  14. Schendel, Ellen; Macauley, William J. (2012). Building Writing Center Assessments That Matter (1st ed.). Logan, Utah: Utah State University Press. p. 40. ISBN 9780874218169.
  15. Lerner, Neal (2003). "Writing Center Assessment: Searching for the "Proof" of Our Effectiveness". The Center Will Hold (1st ed.). Logan, Utah: Utah State University Press. pp. 58–73. ISBN 9780874215700.
  16. Yancey, Kathleen Blake (1999). "Looking Back as We Look Forward: Historicizing Writing Assessment". College Composition and Communication. 50 (3): 487. doi:10.2307/358862. JSTOR 358862.
  17. O'Neill, Peggy; Moore, Cindy; Huot, Brian (2009). Guide to College Writing Assessment (1st ed.). Logan, Utah: Utah State University Press. p. 57. ISBN 9780874217322.
  18. Broad, Bob (2003). What We Really Value: Beyond Rubrics in Teaching and Assessing Writing (1st ed.). Logan: Utah State University Press. ISBN 9780874215533.
  19. Adler-Kassner, Linda; O'Neill, Peggy (2010). Reframing Writing Assessment to Improve Teaching and Learning (1st ed.). Logan, Utah: Utah State University Press. p. 2. ISBN 9780874217988.
  20. Gallagher, Chris (2010). "Assess locally, validate globally: Heuristics for validating local writing assessments" (PDF). WPA: Writing Program Administration. 34 (1): 10–32. Retrieved 30 October 2015.
  21. Simpson, Jeanne (2006). "Managing Encounters with Central Administration". The Writing Center Director's Resource Book. Mahwah, N.J.: Routledge. pp. 199–214. ISBN 9780805856088.
  22. Koster, Josephine (2003). "Administration Across the Curriculum: On Practicing What We Preach". The Center Will Hold (1st ed.). Logan, Utah: Utah State University Press. pp. 151–165. ISBN 9780874215700.