Feedback & Assessment

            Below, I have documented three artifacts from my growth in feedback and assessment. The first is a grammar quiz I gave in the spring of my first year. The second is the feedback, complete with a filled-out rubric, that I gave on an original poem composed that same spring. The third is the rubric I developed for all analytical writing assignments during my second year.

             I arrived at teaching with little regard for the complexities of assessment and feedback. I was of the opinion that rubrics arbitrarily objectified what should be a subjective evaluation of writing. The feedback that made the biggest impact on me as a student usually came through line editing: remarks and questions posed in the margins of my paragraphs. I liked actionable feedback. But I also greatly cherished (and still remember) personalized feedback. Teachers who wrote notes at the end of tests and papers beginning "Dear Ethan," made the evaluation feel more real. Unlike many of my peers, I did not want to be evaluated without the full context of who I was in my writing.

             When I stepped into my fellowship, I was handed a set of assessments from my department. While I could change a bonus or a question or two, I largely kept everything the same that first year, looking to learn from colleagues who had been designing assessments for decades. Among these artifacts are two pieces primarily designed by others: the original poem assignment and rubric, and the grammar quiz. The former is entirely the creation of my mentor, and here you can see my attempts to retrofit my feedback onto it. The quiz maintains the original format, but I took steps to move the questions toward what I would eventually do for my inquiry (to learn more about my developing approach to assessment in Inquiry, click here). In the rubric below, you can see more clearly what I tried to do in my classroom.








             In this assessment, I test the students' understanding of the functions of prepositional phrases from several different angles. I feel it does a good job of testing both structural knowledge of the concept and its application. The sentences I wrote use a variety of prepositional phrases that offer distinct challenges, so the questions do not become repetitive, as many grammar quizzes do. Looking back, however, I do not see enough opportunity for students to think here. The test sought correct answers rather than understanding, and in so doing it prevented me from giving proper feedback. "When giving students feedback on both oral and written work," Paul Black et al. (2004) write, "it is the nature, rather than the amount, of commentary that is critical." With this quiz, I could put plenty of red pen to signal right or wrong, but I could give no feedback on habits of mind or on procedural techniques. I could offer no guidance on what to do next beyond asking how a student had studied. The quiz, part of the frequent quizzing we did to lower the stakes of grammar assessment, played a formative role in the yearlong development of habits of mind, but I offered no feedback there. This issue proved a main point of interest during my inquiry research into my grammar instruction, where you can see how I grappled with developing new, more authentic assessments that allow for process-level feedback.


            Grant Wiggins (2011) writes that an assessment "must offer students a genuine intellectual challenge, and teachers must be involved in designing the test if it is to be an effective point of leverage" (p. 82). As the second artifact shows, I learned this lesson firsthand. I had such a hard time giving Henry feedback within the rubric that my comments faltered: frequently they poured out in a disorganized way, or, with little guidance on what to say, I chose to say nothing. Here, I felt my feedback needed to be constrained to the rubric, something I moved far away from in year two. When grading a paper like this one, where the student struggled, I filled out the rubric first in pencil to make sure the grade matched the quality I felt the work deserved. To me, that was the clearest sign that I needed to rebuild something that assessed what I felt made for a good paper. The rubric should work for me; I should not be bending to it. As for the feedback on this particular paper, I now see my hesitancy to be candid, since the student's response to previous constructive feedback had been to crumple up the paper before reading it, and thus never improve. I was walking on eggshells, not wanting too much red ink to make him retreat.

            In developing my own rubric, I leaned heavily on Susan Brookhart's (2013) How to Create and Use Rubrics for Formative Assessment and Grading. Instead of making the rubric a device geared to evaluate pinpointed items, I wanted it to provide a descriptive review of the performance. I followed the idea that "rubrics give structure to observations" (Brookhart, 2013), asking myself what the core skills I was assessing would involve. Instead of making the descriptions sound like boxes to check, as on the previous rubric, here I tried to pose questions that would guide students' understanding of what I was assessing. Making those questions focused helped not only my evaluation but also the students looking to process the feedback. "To write rubrics, teachers need to focus on the criteria by which learning will be assessed," Brookhart writes; "this focus on what you intend students to learn rather than what you intend to teach actually helps improve instruction." As I rewrote this rubric and asked myself what I truly valued in their writing development at each stage of the year, I found the process hugely helpful for my teaching, just as Brookhart suggests: I had a vision, a clarity of where and how to go. One adjustment I made from the research was to keep numbers, as opposed to bins, because my middle-school students really wanted specific scores. Limiting the scale to 5 points rather than 10 made the evaluation between numbers easier while also honoring this age group's desire for transparency. The reception to this rubric buoyed me, and students told me they were much better able to use it as a measuring tool during the writing process.

             I leave this segment of Teaching & Learning still developing, especially in creating more authentic assessments. While rubrics and written feedback were a major focus of mine this year, in future years I want to consider options for assessment beyond the traditional essay and quiz. Now that I feel comfortable working with essays, I think there is a vast world of potential assessments for me to explore in my classroom.

This is an assessment given in mid-April 2017.

(Click on PDF logo to open file)

Assessment Goals:

  • To assess students' understanding in a variety of ways

  • To assess and build good study habits

  • To build test-taking skills, emphasizing direction reading and proofreading

This is feedback given for a poem in May 2017; the student had previously crumpled up and thrown out every paper he had received. This one he refused to pick up entirely.


This is the rubric I developed for analytical writing in my second year. This particular rubric is for a second-semester essay in the spring of 2019.


Black, P., et al. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 9-21.

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD. 

Wiggins, G. (2011). A true test: Toward more authentic and equitable assessment. Kappan Magazine, 92(7), 81-93.