
I expect students to correct their quizzes and “write feedback to themselves” when they apply for reassessment.  The content that I get varies widely, and most of it is not very helpful, along the lines of

I used the wrong formula

I forgot that V = IR

It was a stupid mistake, I get it now.

I was inspired by Joss Ives’ post on quiz reflection assignments to get specific about what I was looking for.  It all stems from a conversation I had with Kelly O’Shea about two years ago (back when I had launched myself into standards-based/project/flipped/inquiry/Socratic/mindset/critical-thinking/whatnot, all at once and unprepared), a conversation that has been poking its sharp edges into my brain ever since:

Me: Sometimes I press them to be specific about what they learned or which careless mistake they need to guard against in the future. It’s clear that many find this humiliating, some kind of ingenious psychological punishment for having made a mistake. Admitting that they learned something means admitting they didn’t know it all along, and that embarrasses them. Does that mean they’re ashamed of learning?

Kelly: How often do you think they’ve practiced the skill of consciously figuring out what caused them to make a mistake? How often do we just say, “That’s okay, you’ll get it next time.” instead of helping them pick out what went wrong? My guess is that they might not even know how to do it.

Me: *stunned silence*

So this year I developed this.

Phases of Feedback

  1. Understand what you did well
  2. Diagnose why you had trouble
  3. Improve

Steps 1 and 3 can be used even for answers that were accepted as “correct.”

This has yielded lots of interesting insight, as well as some interesting pushback.  Plus, it gave me an opportunity to help my students understand what exactly “generalize” means.  In a future post I’ll try to gather up some examples.  Overall, it’s helped me communicate what I expect, and has helped students develop more insight into their thinking as well as the physics involved.

Sometimes I need to have all the students in my class improve their speed or accuracy in a particular technique.  Sometimes I just need everyone to do a few practice problems for an old topic so I can see where I should start.  But I don’t have time to make (or find) the questions, and I definitely don’t have time to go through them with a fine-toothed comb.

One approach I use is to have students individually generate and grade their own problems.  They turn in the whole graded package, and I write back with narrative feedback.  I get what I need (formative assessment data) and they get what they need — procedural practice, pointers from me, and some practice with self-assessment.

Note: this only works for problems that can be found in the back of a textbook, complete with answers in the appendix.

Here’s the handout I use.

What I Get Out of It

The most useful thing I get out of this is the “hard” question — the one they are unable to solve.  They are not asked to complete it: they are asked to articulate what makes that question difficult or confusing.

Important Principles

  • Students choose questions that are easy, medium, and hard for them.  This means they must learn to anticipate the difficulty level of a question before attempting it.
  • If they get a question wrong, they must either troubleshoot it or solve a different one.
  • They turn in their questions clearly marked right or wrong.

Advantages

  • I don’t have to grade it — just read it and make comments
  • The students get to practice looking at things they don’t fully understand and articulating a question about it
  • I get to find out what they know and what they (think they) don’t know.
  • Students can work together by sharing their strategies, but not by sharing their numbers, since everyone ends up choosing different problems.
  • It makes my expectations explicit about how they should do practice questions in general: with the book closed, page number and question number clearly marked, with the schematics copied onto the paper (“Even if there’s no schematic in the book?!” they ask incredulously — clearly the point of writing down the question is just to learn to be a good scribe, not to improve future search times), etc.

Lessons Learned

I give this assignment during class, or at least get it started during class, to reduce copying.  Once students have chosen and started their questions, they’re unlikely to want to change them.

My students use the same assessment rubric for practically every new source of information we encounter, whether it’s something they read in a book, data they collected, or information I present directly.  It asks them to summarize, relate to their experience, ask questions, explain what the author claims is the cause, and give support using existing ideas from the model.  The current version looks like this (click through to zoom or download):

Assessment for Learning

There are two goals:

  • to assess the author’s reasoning, and help us decide whether to accept their proposal
  • to assess one’s own understanding

If you can’t fill it in, you probably didn’t understand it.  Maybe you weren’t reading carefully, maybe it’s so poorly reasoned or written that it’s not actually understandable, or maybe you don’t have the background knowledge to digest it.  All of these conditions are important to flag, and this tool helps us do that.

The title says “Rubric for Assessing Reasoning,” but we just call them “feedbacks.”

Recently, there has been a spate of feedbacks turned in with the cause and/or the “support from the model” section left blank, or filled with vague truisms (“this is supported by lots of ideas about atoms,” or “I’m looking forward to learning more about what causes this”).

I knew the students could do better — all of them have written strong statements about cause in the past (in chains of cause and effect 2-5 steps long).  I also allow students to write a question about cause, instead of a statement, if they can’t tell what the cause is, or if they think the author hasn’t included it.

So today, after I presented my second draft of some information about RMS measurements, I showed some typical examples of causal statements and supporting ideas.  I asked students to rate them according to their significance to the question at hand, then had some small group discussions.  I was interested (and occasionally surprised) by their criteria for what makes a good statement of cause, and what makes a good supporting idea.  Here’s the handout I used to scaffold the discussions.

The students’ results:

A statement of cause should …

  • Be relevant to the question
  • Help us understand the question or the answer
  • Not leave questions unanswered
  • Give lots of info
  • Relate to the model
  • Explain what physically makes something happen, or ask a question that would help you understand the physical cause
  • Help you distinguish between similar things (like the difference between Vpk, Vpp, Vrms)
  • Not beg the question (not state the same thing twice using different words)
  • Be concrete
  • Make the new ideas easier to accept
  • Use definitions

Well, I was looking for an excuse to talk about definitions — I think this is it!

Supporting ideas from the model should…

  • Help clarify how the electrons work
  • Help answer or clarify the question
  • Directly involve information to help relate ideas
  • Help us see what is going on
  • Give us reasoning so we can in turn have an explanation
  • Clarify misunderstandings
  • Allow you to generalize
  • Support the cause, specifically.
  • Be specific to the topic, not broad (like, “atoms are made of protons, electrons, and neutrons.”)
  • Not use a formula
  • It helps if you understand what’s going on; that makes it easier to find connections

The Last Word

Which ones would you emphasize? What would you add?

My standard (informal) course feedback form asks,

  1. What do you like or dislike about the grading system?
  2. How does the grading system affect your learning?
  3. What do you love about this course?
  4. What do you hate about this course?
  5. What would you change about this course?

The 2nd-year courses are less science and more engineering, so my approach is less inquiry and more project-based.  In particular, in the course they’re evaluating, there’s an independent project where students must define their project, set their own deadlines, set their own evaluation scheme, then grade themselves.  It’s worth a quarter of their grade.  I reserve the right to veto a mark, but I’ve never done it.  Here’s a sample of the feedback I got from 2nd year students last week.

1. Grading system

  • Love reassessment (2)
  • Feel dependent on ActiveGrade
  • Need quicker way of knowing when a test is corrected
  • Love the independent project
  • Make reassessment deadline start when grade is updated?
  • Ability to do skills on your own time.  But they can also pile up.
  • Clearly shows what you need to know
  • Retests help a lot with understanding because you know what you need to improve on
  • Showing improvement helps solidify thoughts

2. Effects of Grading System

  • Reassessing forces you to gain understanding instead of “I failed that let’s move on”
  • I can thoroughly explain certain circuits from my head, I could not do that before.
  • Helpful — I can choose to not finish a lab if I do not understand it fully, then ask questions and come back to it
  • I knew nothing about electronics before this course but skill based learning has really helped me understand many topics

3. Love

  • Reassessing forces you to gain understanding instead of “I failed that let’s move on”
  • Lab work — hands on feel
  • Making things work and understanding what they do
  • Freedom
  • Retests, doing something more than once makes remembering it easier.

4. Hate

  • Lack of info on notch filter (2)
  • Lack of time
  • Nothing

5. Change

  • Hands on – when you don’t quite understand something, lab work refines understanding
  • It’s a pretty refined, good system.  Once you know something, it sticks with you.
  • More time to learn.  3 years?
  • Reassessment deadlines

Series circuits are one of the foundational concepts in electrical work, and one of the first things students build/think about/get assessed on in their first months at school.  My definition of two series components:

  • Two components are in series if all the current in one flows into the second, and all the current in the second comes from the first

Things I have heard about series components:

  1. Components are in series if they’re in a square shape
  2. Components are in series if all the current in one flows into the second
  3. Components are in series if they’re both connected to the power supply
  4. Components are in series if they’re aligned in a straight line
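
To make the contrast concrete for readers outside electronics, here is a minimal sketch of the arithmetic (a hypothetical example with component values I made up, not course material): in a true series pair there is exactly one current, so resistances add and the supply voltage divides between the components.

    # Hypothetical worked example: a 9 V supply and two resistors.
    V = 9.0                  # supply voltage, volts
    R1, R2 = 100.0, 220.0    # resistances, ohms

    # In series (per the definition above), one current flows through both:
    I = V / (R1 + R2)        # ~0.028 A through BOTH components
    V1, V2 = I * R1, I * R2  # voltage divides: ~2.8 V + ~6.2 V = 9 V

    # Idea #3 ("both connected to the power supply") actually describes
    # parallel components: each one draws its own current from the supply.
    I1, I2 = V / R1, V / R2  # 90 mA vs. ~41 mA -- not equal, so not series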

In the first year of the program, we spend a lot of time refining our ideas about which circuits have which behaviours.  We refine and revise and throw out ideas.  By the end of December we should have something fairly strong.

Last week, I had a second-year student tell me he knew that two components were in series because of reason #3 above.  I’m struggling to make sense of this, and the accountability of teaching in a trade school hangs over my head like the razor-edged pendulum in the pit.  In May, some of these students will be working on large-scale industrial robots.  These things weigh tons, carry blades and torches, and can maim or kill people in an instant.  Electronics is not an apprenticeable trade. Grads will not carry tools for a journeyman for three years — they get put right to work.  Also, electronics is not a construction trade — it is a repair trade.  That means that work is almost always done under pressure of short timelines and lost money — the electronics tech doesn’t get called out until something is broken.

I have two years to make sure they are ready to at least begin their industry-specific training.  It’s not good enough for them to sometimes make sense of things — they need to nail these foundational concepts every time in order to use the training the employer provides and make good judgement calls on the job.  Please, no comments about how education is about broadening the mind and this student is learning lots of other valuable skills.  While that’s true, it’s not currently the point.  When that electronics tech does some repairs on the heart-rate monitor keeping tabs on your unborn child, you are not going to be any more interested in the tech’s broad mind than I am.

What does it mean if a student can spend 4 months in DC circuits, not fully integrate the concept of series components, pass the course, and 8 months later still have an unstable concept?

Here are all the ideas I can think of at the moment.  Don’t panic — I don’t think these are all equally likely.

  1. Their experience in DC circuits is not doing enough to help them make sense of this idea
  2. The assessments in DC circuits are not rigorous enough to catch students who are still unsure about this
  3. This student is incapable of consistently making sense of this idea, and should not have been accepted into the program in the first place
  4. It’s normal for students to form, unform, and reform their ideas about new concepts.  It’s inevitable, and sometimes students will revert to previous ways of thinking even after the fantastic course and the rigorous assessments.

If it’s #1, I’m not sure what to do.  I’ve already given over my courses to sense-making, critical thinking, and inquiring.  Do they need more class hours, more time outside class hours, or just different kinds of practice?  Maybe the practice problems are too consistent, failing to address students’ misconceptions.

If it’s #2, I’m not sure what to do.  I feel pretty confident that I’m assessing their reasoning rather than their regurgitating.  More assessments might help — not sure where to get the time.  A final exam might help.  I can’t see my way clear to passing or failing someone on the strength of a final exam, but I’d at least know a bit more about which concepts are still shaky.  I’ve sometimes given a review paper in January on the concepts learned in the previous semester, and worked through multiple drafts — I could start doing that again.

If it’s #3, I’m definitely not sure what to do.

If it’s #4, how do I reconcile this with my sense of personal responsibility to not send them out to get injured or injure someone else?  I realize I’ve framed this in a fairly dramatic way, and not every student who’s unsure of what a series circuit is will end up harming someone.  It’s much more likely that they’ll end up on the job and start to consolidate their knowledge and clear up their misconceptions.  However, it’s also likely that they’ll end up on a job where they suddenly realize that they don’t understand the basic things they’re being asked to do.  This bodes poorly for the grad’s confidence and enjoyment of their career, the employer’s willingness to hire future grads, and of course the quality of our biomedical equipment, manufacturing equipment, navigational equipment, power generation instrumentation, … .  It also bodes poorly for my ability to believe that I am doing a reasonable job.

Thoughts?

 

How I got my students to read the text before class: have them do their reading during class.

Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.

It’s not transformative educational design, but it’s an improvement, with these advantages:

  1. It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
  2. The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric.  That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
  3. When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class.  That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
  4. Knowing that I will address their questions provides an incentive to actually ask them.  It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
  5. Even in a classroom of 20 people, each person gets an individualized pace.
  6. I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
  7. We’re using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo.  This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns.  Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…

It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.

I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources.  It’s pretty similar to what I would say if I was lecturing.  I write the summaries in an informal tone intended to start a conversation.  Here’s an example:

And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).

 

Highlights of student feedback:

Noticing connections to earlier learning

When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite

Summarizing

When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol

Noticing confusion or contradiction

What do f2(OL) and Av(OL) stand for?

I’m still not sure what slew-induced distortion is.

I don’t know how to make sense of the f2 = funity/Av(CL).  Is f2 the bandwidth?

In [other instructor]’s course, we built an audio monitor, and we used an op amp.  We used a somewhat low frequency (1 KHz), and we still got a gain of 22.2.  If I use the equation, the bandwidth would be 45Hz?  Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?
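
(An aside for readers outside electronics: the last two quotes are wrestling with the op amp gain-bandwidth product, f2 = funity/Av(CL).  A quick sketch of the arithmetic, assuming a typical unity-gain frequency of 1 MHz — the actual op amp isn’t named above:)

    # Gain-bandwidth product: closed-loop bandwidth = funity / closed-loop gain.
    f_unity = 1e6    # Hz -- an ASSUMED typical value, not from the course
    Av_CL = 22.2     # closed-loop gain, taken from the student's question

    f2 = f_unity / Av_CL
    print(f"f2 = {f2 / 1e3:.0f} kHz")   # ~45 kHz of bandwidth, not 45 Hz

    # The student's "45 Hz" looks like 1 kHz / 22.2 -- dividing by the signal
    # frequency instead of funity, which is a fixed property of the op amp.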

Asking for greater precision

What is the capacitance of the internal capacitor?

Is this a “flipped classroom”?

One point that stuck with me about many “flipped classroom” conversations is designing the process so that students do the low-cognitive-load activities when they’re home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they’re in class, surrounded by supportive peers and an experienced instructor.

This seems like a logical argument.  The trouble is that reading technical material is a high-cognitive-load activity for most of my students.  Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone.  The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.

The conclusion follows: have them read it in class, where I can support them.

To introduce the incoming students to my grading system, I’ll spend a class explaining, having them practice using a grading sheet, and doing some Q&A.  Last year I had them assess me using the “Teacher Skill Sheet,” and I will do that again.  It helped students understand the reasoning behind the system.

But I found that students often submitted incomplete applications for reassessment, and I wanted to create a resource they could turn to.

It’s a bit too much to digest, I think… a lot to absorb in one bite.  So I’ll introduce the components one by one: how to do a good-quality quiz correction (with inspiration from Joss), how to update the bar graph (their current grade), how to find an appropriate practice problem in the textbook.

But I’m going to put a handout in the front of their skill folders, nonetheless — for future reference.  Here’s the draft so far — comments encouraged, especially about how to make it shorter or more student-friendly.

In the same vein as the last post, here’s a breakdown of how we used published sources to build our model of how electricity works.

  1. I record questions that come up during class.  I track them on a mind-map.
  2. I pull out the list of questions and find the ones that are not measurable using our lab equipment, and relate to the unit we’re working on.
  3. I post the list at the front of the room and let students write their names next to something that interests them.  If I’m feeling stressed out about making sure they’re ready for their impending next courses/entry into the work world, I restrict the pool of questions to the ones I think are most significant.  If I’m not feeling stressed out, or the pool of questions aligns closely with our course outcomes, I let them pick whatever they want.
  4. The students prepare a first draft of a report answering the question.  They use a standard template (embedded below).  They must use at least two sources, and at least one source must be a professional-quality reference book or textbook.
  5. I collect the reports, write feedback about their clarity, consistency and causality, then hand back my comments so they can prepare a second draft.
  6. Students turn in a second draft.  If they have blatantly not addressed my concerns, back it goes for another draft.  They learn quickly not to do this.  I make a packet containing all the second drafts and photocopy the whole thing for each student. (I am so ready for 1:1 computers, it’s not funny.)
  7. I hand out the packets and the Rubric for Assessing Reasoning that we’ve been using/developing.  During that class, each student must write feedback to every other student. (Note to self — this worked with 12 students.  Will it work with 18?)
  8. I collect the feedback.  I assess it for clarity, consistency, and usefulness — does it give specific information about what the reviewee is doing well/should improve.  If the feedback meets my criteria, I update my gradebook — giving well-reasoned feedback is one of the skills on the skill sheet.
  9. If the feedback needs work, it goes back to the reviewer, who must write a second draft.  If the feedback meets the criteria (which it mostly did), then the original goes back to the reviewer, and a photocopy goes forward to the reviewee.  (Did I mention I’m ready for 1:1 computers?)
  10. Everyone now works on a new draft of their presentation, taking into account the feedback they got from their classmates.
  11. I collect the new drafts.  If I’m not confident that the class will be able to have a decent conversation about them, I might write feedback and ask for another draft. (Honest, this does not go on forever.  The maximum was 4, and that only happened once.) I make yet another packet of photocopies.
  12. Next class, we will push the desks into a “boardroom” shape, and some brave soul will volunteer to go first.  Everyone takes out two documents: the speaker’s latest draft, and the feedback they wrote to that speaker.

The speaker summarizes how they responded to people’s feedback, and tells us what they believe we can add to the model.  We evaluate each claim for clarity, consistency, causality.  We check the feedback we wrote to make sure the new draft addressed our questions.  We try to make it more precise by asking “where,” “when,” “how much,” etc.  We try to pull out as many connections to the model as we can.  The better we do this, the more ammo the class will have for answering questions on the next quiz.

Lots of questions come up that we can’t answer based on the model and the presenter’s sources.  Sometimes another student will pipe up with “I think I can answer that one with my presentation.”  Other times the question remains unanswered, waiting for the next round (or becoming a level-5 question).  As long as something gets added to the model, the presenter is marked complete for the skill called “Contribute an idea about [unit] to the model.”

We do this 4-5 times during the semester (once for each unit).

Example of a student’s first draft

I was pretty haphazard in keeping electronic records last semester.  I’ve got examples of each stage of the game, but they’re from different units — sorry for the lack of narrative flow.

This is not the strongest first draft I’ve seen; it illustrates a lot of common difficulties (on which, more below).  I do want to point out that I’m not concerned with the spelling.  I’ve talked with the technical writing instructor about possible collaborations; in the future, students might do something like submit their paper to both instructors, for different kinds of feedback.  I’m also not concerned with the informal tone.  In fact, I encourage it.  Getting the students to the point where they believe that “someone like them” can contribute to a scientific conversation, must contribute to that conversation, or indeed that science is a conversation, is a lot of ground to cover.  There is a place for formal lab reports and the conventions of intellectual discourse, but at this point in the game we hadn’t developed a need for them.

Feedback I would write to this student

Source #1: Thanks for including the description of what the letters mean.  It improves the clarity of the formula.

Source #2: It looks like you’ve used the same source both times.  Make sure to include a second source — see me if you could use some help finding a good one.

Clarity: In source #1, the author mentions “lowercase italic letters v and i…” but I don’t see any lower case v in the formula.  Also, source #1 refers to If, but I don’t see that in the formula either. Can you clarify?

Cause: Please find at least one statement of cause and effect that you can make about this formula.  It can be something the source said or something you inferred using the model.  What is causing the effect that the formula describes?

Questions that need to be answered: That’s an interesting question.  Are you referring to the primary and secondary side of a transformer?  If so, does the source give you any information about this? If you can’t find it, bring the source with you and let’s meet to discuss.

Common trouble spots

It was typical for students to have trouble writing causal statements.  I’m looking for any cause-and-effect pair that connects to the topic at hand.  I think the breadth of the question is what makes it hard for students to answer.  They don’t necessarily have to tell me “what causes the voltage of a DC inductor to be described by this formula” (which would be way out of our league).  I’d be happy with “the inductor’s voltage is caused by the current changing suddenly when the circuit is turned on,” or something to that effect.  I’m not sure what to do about this, except to demonstrate that kind of thinking explicitly, and continue giving feedback.

It was also common for students to have trouble connecting ideas to the model.  If the question was about something new, they would often say “nothing in the model yet about inductors…” when they could have included any number of connections to ideas about voltage, current, resistance, atoms, etc.  I go back and forth about this.

In the example above, I could write feedback telling the student I found 5 connections to the model in my first three minutes of looking, and I expect them to find at least that many.  I could explicitly ask them to find something in the model that seemed to contradict the new idea (I actually had a separate section for contradictions in my first draft of the template).  That helped, but students too often wrote “no contradictions” without really looking.  Sometimes I just wait for the class discussion, and ask the class to come up with more connections, or ask specific questions about how this connects to X or Y.  This usually works well, because that’s the point at which they’re highly motivated to prevent poorly reasoned ideas from getting into the model.  Still thinking about this.

Example Student Feedback

(click through to see full size)

I don’t have a copy of the original paper on “Does the thickness of wire affect resistance,” but here is some feedback a classmate wrote back.

Again, you can see that this student answered “What is the chain of cause and effect” with “No.”  Part of the problem is that this early draft of the feedback rubric asks, in the same box, if there are gaps in the chain.  In the latest draft, I have combined some of the boxes and simplified the questions.

What’s strong about this feedback: this student is noticing the relationship between cross-sectional area of a wire (gauge), and cross-sectional area of a resistor.  I think this is a strong inference, well-supported by the model.  The student has also taken care to note their own experience with different “sizes” of resistor (in other words, resistors of the same value that are cross-sectionally larger/smaller).  Finally, they propose to test that inference.  The proposed test will contradict the inference, which will lead to some great questions about power dissipation.  Here the model is working well: supporting our thinking about connections, and leading us to fruitful tests and questions.

Example of my first draft

Sometimes I wrote papers myself.  This happened if we needed 12 questions answered on a topic, but there were only 11 students.  It also happened when we did a round of class discussions only to realize that everyone’s paper depended on some foundational question being answered, but no one had chosen that question.  Finally, I sometimes used it if I needed the students to learn a particular thing at a particular time (usually because they needed the info to make sense of a measurement technique or new equipment). This gave me a chance to model strong writing, and how to draw conclusions based on the accepted model.  It was good practice for me to draw only the conclusions that could be supported by my sources — not the conclusions that I “knew” to be true.

I tried to keep the tone conversational — similar to how I would talk if I was lecturing — and to expose my sense-making strategies, including the thoughts and questions I had as I read.

In class, I would distribute my paper and the rubrics.  Students would spend the class reading and writing me some feedback.  I would circulate, answering questions or helping with reading comprehension.  I would collect the feedback and use it to prepare a second draft, exactly as they did.  If nothing else, it really sold the value of good technical writing.  The students often commented on writing techniques I had used, such as cutting out sections of a quote with ellipses or using square brackets to clarify a quote.

Reading student feedback on my presentations was really interesting.  I would collect their rubrics and use those to prepare a second draft.  The next day, I would discuss with them my answers and clarifications, and they would vote on whether to accept my ideas to the model.  At the beginning of the year they accepted them pretty uncritically, but by the end of the year I was getting really useful feedback and suggestions about how to make my model additions clearer or more precise.

I wish I had some student feedback to show you, but unfortunately I didn’t keep copies for myself.  Definitely something I will do this year.

How It’s Going

I’m pretty satisfied with this.  It might seem like writing all that feedback would be impossible, but it actually goes pretty quickly.

Plan for improvement: Insist on electronic copies.  Last year I gave the students the choice of emailing their file to me or making hard copies for everyone and bringing them to class.  Because bringing hard copies bought them an extra 12 hours to work on it, many did that.  But being able to copy and paste my comments would help.  Just being able to type my comments is a huge time-saver (especially considering the state of my handwriting).

The students benefit tremendously from the writing practice, the thinking practice and, nothing to sneeze at, the “using a word-processor correctly” practice.  They also benefit from the practice at “giving critical feedback in a respectful way,” including to the teacher (!), and “telling someone what is strong about their work, not just what is weak.” If their writing is pretentious, precious, or unnecessarily long, their classmates will have their heads.  And, reading other students’ writing makes them much more aware of their own writing habits and choices.

I’m not grading the presentation, so I don’t have to waste time deliberating about the grade, or whether it’s “good enough.”  I just read it and respond, in a fairly conversational way.  It’s a window into my students’ thinking that puts zero pressure on me, and very little pressure on the students — it’s intellectually stimulating, I don’t have to get to every single student between 9:25 and 10:20, and I can do it over an iced coffee on a patio somewhere.  I won’t lie — it’s a lot of work.  But not as much work as grading long problem sets (like I did in my first year), way more interesting, and with much higher dividends.

Resources

MS Word template students used for their papers

Rubric students used for writing feedback.  Practically identical but formatted for hand-written comments

Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty.  It was part of a week-long course on assessment and evaluation.  I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.

Some Examples of Feedback

“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”

“You practiced what you were preaching”

“The forms can be changed and used in my classes”

“Great facilitator — no jargon, plain talk, right to the point! Excellent.  Very useful.”

“You were great! I like you! Good job! (sorry about that)  :)”

“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”

“Needed more time”

“Good quality writing exercise was a bit disconnected”

“Finally a tradeswoman who can relate to the trades”

In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them.  The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good.  Of course, I loved the humour in the “You were great” comment shown above —  if someone can parody something, it’s pretty convincing evidence of understanding.  I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.

Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room.  They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it.  It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).

Outline

  1. Peer assessment is a process of having students improve their work based on feedback from other students
  2. To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
  3. Peer assessment can help students improve their judgement about their own work
  4. Peer assessment can help students depend less on the teacher to solve simple problems
  5. Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
  6. Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it

I distributed this handout for people to follow.  I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.

 

I was nervous enough that I wrote speaking notes that are almost script-like.  I rehearsed enough that I didn’t need them most of the time.

 

Avoiding Pitfall #1: People feeling either patronized or left behind

I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs.  I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts.  There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review.  At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.

Participants’ Questions

After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment.  Questions fell into these categories:

  • How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
  • What if students are too intimidated/unconfident to share their work with their peers?  (At least one participant worried that this could be emotionally dangerous)
  • Why would students buy in — what’s in it for the assessor?
  • When/for what tasks can it be used?
  • Logistics: does everyone participate?  Is it required? Should students’ names be on it?  Should the assessment be written?
  • How quick can it be?  We don’t have a lot of time for touchy-feely stuff.
  • Can this work with individualized learning plans, where no two students are at the same place in the curriculum?

Is Peer Assessment Emotionally Safe?

I really didn’t see these questions coming.  I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being.  That point was raised by participants ranging from School of Trades to the Health & Human Services faculty.

It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other.  I pointed that out, and saw lots of head nodding.

Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee.  It also wasn’t about group projects.  We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty.  It was not about people’s feelings.  It was about their work. 

I saw jaws drop.  Some School of Trades faculty actually cheered.  It really gave me pause.  In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.”  I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it.  My emotions are sometimes relevant to this process, and sometimes not.  I wonder how other people interpret the directive to “reflect.”  I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little strategy about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.

How to get honest peer assessments?

I talked briefly about helping students generate useful feedback.  One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient.  The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality).  Later, I might collect and read the feedback before giving it back to the recipient.  I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).

Another way to lower the intimidation factor is to have everyone assess everyone.  This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return.  It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.

Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all.  It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later.  Students are quick to realize that it might as well be sooner.   Also, I typically do this during class time, so I had a roughly 100% completion rate last year.

How to get useful peer assessments?

I went ahead with my plan to have workshop participants think about solder joints.  A good solder joint is shiny, smooth, and clean.  It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on.  I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.

Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (i.e. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback.  I asked participants to work through the feedback, determining if it met these criteria:

  1. Identifies strengths and weaknesses
  2. Gives clear suggestion about what to do next time
  3. Focusses on the student’s work, not their talent or intelligence

We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2).  This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not.  Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop.  I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.

Avoiding Pitfall #2: This won’t work in my classroom

Surprisingly, this didn’t come up at all.

I came back often to the idea that there are things students can assess for each other and there are things they need us for.  I made sure to reiterate that each teacher would be the best judge of which tasks were which in their discipline.  I also invited participants to consider whether a student could fully assess a given task, or only a few of the simpler criteria.  Which criteria?  What must the students necessarily include in their feedback?  What must they stay away from, and how is this related to the norms of their discipline?  We didn’t have time to discuss this.  If you were a participant in the workshop and you’re reading this, I’d love to hear what you came up with.

Pitfall #3: Disconnected/too long

Well, I wasn’t able to avoid this.  After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback.  They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality).  We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.

Questions that didn’t get answered:

Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class.  The group meets at a specified time but each student is working at their own pace.  I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?

One teacher mentioned a student who was adamant that peer assessment violated their privacy, that only the teacher should  see it.  I never ran into this problem, so I’m not sure what would work best.  A few ideas I might try: have students assess “fake” work at first, so they can get the hang of it and get comfortable with the idea, or remove names from work so that students don’t know who they’re assessing.  In my field, it’s pretty typical for people to inspect each other’s work; in fields where that is true, I would sell it as workplace preparation.

We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment.  My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion.  Mostly, that’s because attitudes and opinions are not assessable for accuracy.  (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion).  I suppose I could have students assess each other’s opinions and attitudes for clarity — not whether your position is right or wrong, but whether I can understand what your position is.  I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.

Obviously I encourage students to share their feelings in lots of big and small ways.  In practice, they do — quite a lot.  But I can’t see my way clear to requiring it.  Partly it’s because that is not typically a part of the discipline we’re in.  Partly it’s because I hate it, myself.  At best, it becomes inauthentic.  The very prospect of forcing people to share their feelings seems to make them want to do it less.  It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others.  I’m trying to help them get better at making those decisions themselves — not make those decisions for them.  Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion.  Judgments are fair game, and must be assessed for good-quality reasoning.  Opinions and feelings are not.  We can share them and agree or disagree with them, but I don’t consider that to be assessment.

Finally, a participant asked about how to build student buy-in.  Students might ask, what’s in it for me?  What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates.  They read it voraciously, with much more interest than they read feedback from me.  In the end, people love reading about themselves.

I’ve been asked to give a presentation, on Tuesday, to a group of new-ish community college teachers.  Since so many of my ideas are stolen (er, reused with the kind permission of various blog authors), I thought I’d put my ideas out there for comments, suggestions, warnings, or admonishments…

The Audience

The workshop is part of a week-long course called Assessing and Evaluating Adult Learners that is mandatory for all new faculty at my school.  The participants will have zero, one, or at most two years of teaching experience.  Remember, they’re like me: no ed school degree, maybe no university degree.  Our school is not what the US considers a “community college” — Canada has no such thing as an Associate degree.  Our school offers one and two-year programs that range from plumbing to culinary arts to nail-care technician to office administration.  New faculty are hired based on their experience in the trade; for example, when I started teaching three years ago, I left a position as a sea-going design tech with the Canadian Coast Guard.

So we get hired, deal with the culture shock of leaving industry for an educational institution and, if we’re lucky, we have a summer to get organized.  That’s when people do a little planning and take some of these week-long courses.  If we’re less lucky (like I was), we’re hired one day, in a classroom the next, and our unbelievably dedicated co-workers hold us up until the following summer when we can finally take a deep breath and get organized for the next go-around.  All new faculty are required to take 10 of these week-long courses within our first two years of employment.

The Workshop

I finished my 10 credits last summer; regular readers will be unsurprised that the facilitators marked me as an obsessive assessment geek.  They have asked me to offer a one-hour workshop about peer assessment.

Here are some of the ideas I have for the agenda.

0. Intros

I’ll ask each participant to introduce themselves with their name, program, experience using peer assessment, and any questions they have.  I’ll talk about the goals for the hour and the agenda.

1. What is peer assessment, and are you doing it already?

There are lots of simple or informal ways this can happen.  I’ll give examples and definitions, and explain my assumptions about terminology.  I’ll also explicitly ask whether they’ve used peer assessment.

Examples:

  • Students inspect each other’s work in a shop class
  • Students compare and discuss their math assignment before handing it in
  • A student helps their classmate troubleshoot a lab that’s not working

2. Why use peer assessment?

Before:

  • I’d give tons of feedback on assignments, and students didn’t read it, or didn’t use it
  • Some students spent their shop time running to me every five minutes asking, “is this good?”
  • Some students couldn’t figure out when they were finished, or whether their work was good, or even what question to ask, so they kept fiddling with it endlessly instead of moving on to the next task
  • Students would hand in work without looking at the rubric
  • Students were afraid to try things that were unfamiliar

After:

  • Peer-assessment helps students self-assess
  • More peer assessment and better self-assessment means that teacher-assessment can be focussed where it’s really needed

3.  What is good-quality assessment?

  1. Contains a specific diagnosis about what is well done and what should be improved
  2. Contains specific ideas about how to improve
  3. Given at the time that something can be done about it
  4.  Focuses on the student’s work, not their talent or intelligence

4. Practice: peer assessment of a performance task

I need a skill that’s simple and that we can all discuss together.  Since there is no clear overlap in our expertise, I’m planning to use a task that my students learn at the beginning of the program: how to inspect a solder joint, using a 3-point scale (smooth, shiny, clean).  This may be a mistake — it will increase cognitive load and threatens to bore anyone who feels alienated from “hands-on,” “skilled-trades” focussed concepts.  On the other hand, generic tasks like “riding a bike” can strike me as contrived and condescending.  I’ve got lots of slides of microscope close-ups of solder joints; I’ll show one, explain the rubric, and write some feedback, possibly using Jason Buell’s “sentence frames.”  I’ll have the participants assess my feedback on the 4-point scale above.  Then I’ll write some bad feedback, and ask them to improve it.

5. Practice: peer assessment of a writing task

For this, I’ll give them a short reading (probably The Praise A Child Should Never Hear, based on Carol Dweck’s research).  I’ll ask them to write feedback to the author, using the rubric for assessing reasoning that I’ve been using with my students. It asks readers to assess clarity, coherence, and cause.  It will probably need to be tweaked a bit so that it doesn’t refer to a physical model.

6. Review, questions

I’ll take questions and review the ideas that came up during the intros.  The handout package will contain some notes, examples of the worksheets (including extra copies of the rubric for assessing reasoning), a list of links and resources for further reading, as well as an evaluation sheet.  I’m experimenting with a new format of evaluation, cribbed from WillAtWorkLearning.  The draft so far is here.

The Booby-Traps

Differentiation

It can be hard sometimes to set a respectful tone in such a short time.  Some teachers will be brand new and have no experience to draw on, not even student teaching or practice teaching or what have you (remember, they’re coming straight from a professional kitchen, not ed school).  Others will have a couple of years under their belt, and be frustrated that I appear to be explaining peer assessment as if they’re not doing it already.  The only thing I can think of here is to ask at the beginning who is using it already and how.  That should help me gauge how much I can draw on them to share their experience, and let them know that at least I’m not assuming no one else has ever heard of this before.

As Shawn Cornally puts it:

I’m a huge douche when it comes to thinking I know what someone is about to say. I always think I do because the language of teaching is so plural. I need to work on that, I bet people think I’m mean. Or, stated another way: If you think you’re already “doing” every new idea, pedagogy, and assessment strategy, you’re probably not, and you may be douchey, like me.

That Won’t Work In My Classroom

I’ve never given a workshop for teachers before.  But I’ve attended lots of them, some crushingly awful.   (To be fair, presentations in general are often crushingly awful).  I fear this:

Some majority percentage of them was watching and waiting only for one moment. They were waiting for the one phrase or condition or fragment that would allow them to write the whole idea off. They wanted the excuse to say, “That wouldn’t fly in my class.”

(credit: Dan Meyer)

I suspect that the likely source of that sentiment is something like “the students don’t know enough to do that yet.”  I’m trying to address that by showing explicitly the decision-making process of what feedback I can reasonably expect my students to give, and what I can’t.  I’m focussing on the idea that feedback doesn’t have to be about correctness.  If it is about correctness, it doesn’t have to be about completeness. Peer assessment can take some of the routine feedback off of teachers’ hands and put it in students’ hands.  That leaves more teacher time for the things that students truly can’t do (yet).

Shawn again:

Teachers want to be validated as professional educators and content knowledge specialists. This need comes out during discussions and can often be very repetitive.

I hope that distinguishing between feedback students can give and feedback teachers are needed for can alleviate this a bit.

I’m also taking pointers from Dan on this one: rehearsal, jokes about whiskey, frequent nods to all subject areas, working through examples of how to use peer assessment with both writing tasks and performance tasks.  That leaves me drawing a blank about how to deal with this:

Even two years into teaching… I was so comfortable, cocky, and sure of my methods I would find any way to dismiss a good suggestion.

(Dan again.)

The irony is not lost on me that I’m two years into teaching (at this school) and cocky enough to get up and pretend to tell someone else how to teach.

No Through-Line

The points seem disconnected.  They’re about peer assessment but I get the feeling they don’t hang together.

Too Much Stuff

This is probably too much for a 1hr presentation.  I could let participants choose which of the two feedback methods they wanted to experiment with to gain some time.

Got any other suggestions?  Fire away!
