Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty. It was part of a week-long course on assessment and evaluation. I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.
Some Examples of Feedback
“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”
“You practiced what you were preaching”
“The forms can be changed and used in my classes”
“Great facilitator — no jargon, plain talk, right to the point! Excellent. Very useful.”
“You were great! I like you! Good job! (sorry about that) :)”
“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”
“Needed more time”
“Good quality writing exercise was a bit disconnected”
“Finally a tradeswoman who can relate to the trades”
In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them. The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good. Of course, I loved the humour in the “You were great” comment shown above — if someone can parody something, it’s pretty convincing evidence of understanding. I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.
Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room. They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it. It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).
- Peer assessment is a process of having students improve their work based on feedback from other students
- To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
- Peer assessment can help students improve their judgement about their own work
- Peer assessment can help students depend less on the teacher to solve simple problems
- Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
- Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it
I distributed this handout for people to follow. I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.
I was nervous enough that I wrote speaking notes that are almost script-like. I rehearsed enough that I didn’t need them most of the time.
Avoiding Pitfall #1: People feeling either patronized or left behind
I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs. I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts. There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review. At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.
After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment. Questions fell into these categories:
- How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
- What if students are too intimidated/unconfident to share their work with their peers? (At least one participant worried that this could be emotionally dangerous)
- Why would students buy in — what’s in it for the assessor?
- When/for what tasks can it be used?
- Logistics: does everyone participate? Is it required? Should students’ names be on it? Should the assessment be written?
- How quick can it be? We don’t have a lot of time for touchy-feely stuff.
- Can this work with individualized learning plans, where no two students are at the same place in the curriculum?
I really didn’t see these questions coming. I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being. That point was raised by participants ranging from School of Trades to the Health & Human Services faculty.
It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other. I pointed that out, and saw lots of head nodding.
Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee. It also wasn’t about group projects. We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty. It was not about people’s feelings. It was about their work.
I saw jaws drop. Some School of Trades faculty actually cheered. It really gave me pause. In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.” I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it. My emotions are sometimes relevant to this process, and sometimes not. I wonder how other people interpret the directive to “reflect.” I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little strategy about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.
How to get honest peer assessments?
I talked briefly about helping students generate useful feedback. One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient. The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality). Later, I might collect and read the feedback before giving it back to the recipient. I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).
Another way to lower the intimidation factor is to have everyone assess everyone. This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return. It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.
Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all. It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later. Students are quick to realize that it might as well be sooner. Also, I typically do this during class time, so I had a roughly 100% completion rate last year.
How to get useful peer assessments?
I went ahead with my plan to have workshop participants think about solder joints. A good solder joint is shiny, smooth, and clean. It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on. I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.
Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (e.g. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback. I asked participants to work through the feedback, determining if it met these criteria:
- Identifies strengths and weaknesses
- Gives clear suggestion about what to do next time
- Focusses on the student’s work, not their talent or intelligence
We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2). This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not. Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop. I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.
Avoiding Pitfall #2: This won’t work in my classroom
Surprisingly, this didn’t come up at all.
I came back often to the idea that there are things students can assess for each other and there are things they need us for. I made sure to reiterate often that each teacher would be the best judge of which tasks were which in their discipline. I also invited participants to consider whether a student could fully assess a given task, or only assess a few of the simpler criteria. Which criteria? What must the students necessarily include in their feedback? What must they stay away from, and how is this related to the norms of their discipline? We didn’t have time to discuss this. If you were a participant in the workshop and you’re reading this, I’d love to hear what you came up with.
Pitfall #3: Disconnected/too long
Well, I wasn’t able to avoid this. After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback. They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality). We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.
Questions that didn’t get answered:
Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class. The group meets at a specified time but each student is working at their own pace. I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?
One teacher mentioned a student who was adamant that peer assessment violated their privacy, that only the teacher should see it. I never ran into this problem, so I’m not sure what would work best. A few ideas I might try: have students assess “fake” work at first, so they can get the hang of it and get comfortable with the idea, or remove names from work so that students don’t know who they’re assessing. In my field, it’s pretty typical for people to inspect each other’s work; in fields where that is true, I would sell it as workplace preparation.
We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment. My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion. Mostly, that’s because attitudes and opinions are not assessable for accuracy. (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion). I suppose I could have students assess each other’s opinions and attitudes for clarity — not whether your position is right or wrong, but whether I can understand what your position is. I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.
Obviously I encourage students to share their feelings in lots of big and small ways. In practice, they do — quite a lot. But I can’t see my way clear to requiring it. Partly it’s because that is not typically a part of the discipline we’re in. Partly it’s because I hate it, myself. At best, it becomes inauthentic. The very prospect of forcing people to share their feelings seems to make them want to do it less. It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others. I’m trying to help them get better at making those decisions themselves — not make those decisions for them. Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion. Judgments are fair game, and must be assessed for good-quality reasoning. Opinions and feelings are not. We can share them and agree or disagree with them, but I don’t consider that to be assessment.
Finally, a participant asked about how to build student buy-in. Students might ask, what’s in it for me? What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates. They read it voraciously, with much more interest than they read feedback from me. In the end, people love reading about themselves.