I have found during my limited classroom teaching experience that I very much value assessment, at least of the kind Walvoord identifies as “classroom assessment,” which “takes place within the confines of a single class” when “The instructor examines student work, talks with students about what worked for them, and then make changes to his or her pedagogy or classroom activities” (Assessment Clear and Simple 4). Because students generally provide little feedback on the Student Evaluation of Teaching forms they submit at the end of the semester, I’ve used several strategies to gauge how a course is going: informal, anonymous feedback throughout the semester on students’ concerns and thoughts; full-class discussions as the course concludes about what students would find more helpful and what I might change if and when I reteach the course; and, in the case of a particularly troublesome and unengaged class, written feedback in response-essay form for extra credit. I am particularly concerned with how my courses serve my students’ learning needs and desires, and I find that I cannot schedule an entire semester of class sessions in advance because I like, and almost need, the flexibility to adapt to each classroom’s specific needs. For instance, seeing that my current freshman composition class seems to be struggling because students devote too little time to writing, and that offering extra credit for attending scholarly events on campus is unfair given the number of working students in my classroom, I have decided to offer a “Daily Writing Challenge” for extra credit for the remainder of the course: students demonstrate significant progress on their writing assignments (150 new words, significant revision of 250+ words, freewriting for brainstorming, reflection and plans for completion, outlines, etc.) through daily posts to discussion boards on Blackboard.
In the spirit of fair play, and to encourage my own writing progress, I will participate as well, and any days I miss become freebie days for my students that week. I think it’s a win-win plan (should my students take me up on the challenge), and I’m quite happy to offer extra credit to students for demonstrating applied effort in their writing processes.
I’m less sure when it comes to a more general assessment of student learning, particularly because such assessment frequently turns quantitative, and I find writing (and literature) skills largely unquantifiable given the individual nature of each person’s performance. These subjects are also deeply subjective, as there is often no right or wrong answer. Discussions with other instructors have clearly shown that we value different features in student writing and knowledge: what I think is a great thesis is not detailed enough for another instructor, or what I find to be a flawed argument (or interpretation) is acceptable to someone else. Ulrich’s discussion of what her department values versus what those outside her department value in terms of assessment sums up my reservations:
“Although this sort of quantified data often elicits high praise from outside of our department, we find the usefulness of this purely quantitative method to be limited at best. As the person largely responsible for designing, implementing, and coordinating our program assessment process, I confess that my response to increasing demands for quantitative data has been to flood the local reporting terrain with wave after wave of statistics, generating a data-deluge designed to drown (out) those very demands so that we can get on with the business of teaching our students and figuring out how best to improve their learning” (“English Program Assessment and the Institutional Context” 4-5).
Part of my fear comes from my own experience with an attempt to quantify knowledge and skill: the GRE and the GRE Literature Subject exams. I took them toward the end of my undergraduate program and again toward the end of my Master’s degree, and my writing score dropped several points from the first test to the second, despite my increased experience and skill in writing analytical essays. Moreover, the subject test examines knowledge of facts more than anything else, and I first scored in the 29th percentile, an embarrassingly low score given my English degree. After studying with the help of a study-guide handbook, I raised my score to the 62nd percentile, which is still low considering I had nearly completed my Master’s at the time. These tests did not reflect my ability to succeed in a graduate program in English literature, yet that is exactly what they claim to measure. There are simply too many variables in our field for me to feel comfortable quantifying it through assessment, and I struggle to view assessment on a large, department- or university-wide scale in a way that is not quantitative.