
Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty.  It was part of a week-long course on assessment and evaluation.  I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.

Some Examples of Feedback

“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”

“You practiced what you were preaching”

“The forms can be changed and used in my classes”

“Great facilitator — no jargon, plain talk, right to the point! Excellent.  Very useful.”

“You were great! I like you! Good job! (sorry about that)  :)”

“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”

“Needed more time”

“Good quality writing exercise was a bit disconnected”

“Finally a tradeswoman who can relate to the trades”

In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them.  The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good.  Of course, I loved the humour in the “You were great” comment shown above —  if someone can parody something, it’s pretty convincing evidence of understanding.  I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.

Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room.  They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it.  It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).

Outline

  1. Peer assessment is a process of having students improve their work based on feedback from other students
  2. To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
  3. Peer assessment can help students improve their judgement about their own work
  4. Peer assessment can help students depend less on the teacher to solve simple problems
  5. Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
  6. Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it

I distributed this handout for people to follow.  I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.


I was nervous enough that I wrote speaking notes that are almost script-like.  I rehearsed enough that I didn’t need them most of the time.


Avoiding Pitfall #1: People feeling either patronized or left behind

I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs.  I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts.  There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review.  At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.

Participants’ Questions

After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment.  Questions fell into these categories:

  • How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
  • What if students are too intimidated/unconfident to share their work with their peers?  (At least one participant worried that this could be emotionally dangerous)
  • Why would students buy in — what’s in it for the assessor?
  • When/for what tasks can it be used?
  • Logistics: does everyone participate?  Is it required? Should students’ names be on it?  Should the assessment be written?
  • How quick can it be?  We don’t have a lot of time for touchy-feely stuff.
  • Can this work with individualized learning plans, where no two students are at the same place in the curriculum?

Is Peer Assessment Emotionally Safe?

I really didn’t see these questions coming.  I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being.  That point was raised by participants ranging from School of Trades to the Health & Human Services faculty.

It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other.  I pointed that out, and saw lots of head nodding.

Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee.  It also wasn’t about group projects.  We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty.  It was not about people’s feelings.  It was about their work. 

I saw jaws drop.  Some School of Trades faculty actually cheered.  It really gave me pause.  In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.”  I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it.  My emotions are sometimes relevant to this process, and sometimes not.  I wonder how other people interpret the directive to “reflect.”  I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little clarity about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.

How to get honest peer assessments?

I talked briefly about helping students generate useful feedback.  One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient.  The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality).  Later, I might collect and read the feedback before giving it back to the recipient.  I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).

Another way to lower the intimidation factor is to have everyone assess everyone.  This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return.  It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.

Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all.  It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later.  Students are quick to realize that it might as well be sooner.   Also, I typically do this during class time, so I had a roughly 100% completion rate last year.

How to get useful peer assessments?

I went ahead with my plan to have workshop participants think about solder joints.  A good solder joint is shiny, smooth, and clean.  It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on.  I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.

Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (e.g. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback.  I asked participants to work through the feedback, determining if it met these criteria:

  1. Identifies strengths and weaknesses
  2. Gives clear suggestion about what to do next time
  3. Focusses on the student’s work, not their talent or intelligence

We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2).  This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not.  Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop.  I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.

Avoiding Pitfall #2: This won’t work in my classroom

Surprisingly, this didn’t come up at all.

I came back often to the idea that there are things students can assess for each other and there are things they need us for.  I reiterated that each teacher would be the best judge of which tasks were which in their discipline.  I also invited participants to consider whether a student could fully assess a given task, or only a few of the simpler criteria.  Which criteria?  What must the students necessarily include in their feedback?  What must they stay away from, and how is this related to the norms of their discipline?  We didn’t have time to discuss this.  If you were a participant in the workshop and you’re reading this, I’d love to hear what you came up with.

Pitfall #3: Disconnected/too long

Well, I wasn’t able to avoid this.  After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback.  They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality).  We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.

Questions that didn’t get answered:

Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class.  The group meets at a specified time but each student is working at their own pace.  I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?

One teacher mentioned a student who was adamant that peer assessment violated their privacy, that only the teacher should  see it.  I never ran into this problem, so I’m not sure what would work best.  A few ideas I might try: have students assess “fake” work at first, so they can get the hang of it and get comfortable with the idea, or remove names from work so that students don’t know who they’re assessing.  In my field, it’s pretty typical for people to inspect each other’s work; in fields where that is true, I would sell it as workplace preparation.

We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment.  My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion.  Mostly, that’s because attitudes and opinions are not assessable for accuracy.  (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion).  I suppose I could have students assess each other’s opinions and attitudes for clarity  — not whether your position is right or wrong, but whether I can understand what your position is.   I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.

Obviously I encourage students to share their feelings in lots of big and small ways.  In practice, they do — quite a lot.  But I can’t see my way clear to requiring it.  Partly it’s because that is not typically a part of the discipline we’re in.  Partly it’s because I hate it, myself.  At best, it becomes inauthentic.   The very prospect of forcing people to share their feelings seems to make them want to do it less.  It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others.  I’m trying to help them get better at making those decisions themselves — not make those decisions for them.  Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion.  Judgments are fair game, and must be assessed for good-quality reasoning.  Opinions and feelings are not.  We can share them and agree or disagree with them, but I don’t consider that to be assessment.

Finally, a participant asked about how to build student buy-in.  Students might ask, what’s in it for me?  What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates.  They read it voraciously, with much more interest than they read feedback from me.  In the end, people love reading about themselves.

The teacher’s skill sheet was a success (thanks, Dan).  Today was our third day with the first-year students, and my first time explaining skills-based-grading to an incoming class. Our reassessment period is Thursdays from 2:30 – 4:30, so in this morning’s shop class I dropped a skill sheet on their benches and we started using it.  By the time I started explaining how I grade this afternoon, they already had a skill signed off.

I handed out their skills folders and the first two skill sheets for DC circuits.  You should have seen their jaws drop when I explained that they can choose if, when, and how often they reassess.  They asked great questions and gave thoughtful answers.  We talked about how everyone progresses, the many ways of getting extra help, learning at your own pace, and the infinite ways of demonstrating improvement or proficiency.    They wanted to know what is proof of improvement (required when applying for reassessment), and had suggestions (quiz corrections, practice problems, written explanations).  They wanted to know what level 5 questions are, where to find some, and how to prevent them from getting too big.  Many of them had ideas in mind already and we bounced those around to see if they meet the criteria (at least two skills, and you have to choose the problem-solving approach yourself, so it can’t be the same as something we’ve done in class).

We talked about how and why you couldn’t get credit for level 4 until you’ve completed level 3.  I explained it in terms of employers’ expectations about basic skills.  One student explained it back to me in terms of “levelling up your character” in role-playing games.  We talked about feedback, from me and from themselves.  I gave examples of feedback that does and does not help you improve (“I need to figure out why V and I are different” compared to “I don’t get it.”).  We talked about how many points homework is worth (none).  My get-to-know-you survey tells me there are a lot of soccer players in the room, so we talked about practices and push ups.  “Do you get points in the league standings for showing up to practice?  What about for going to the gym?”  I asked.  Of course they said no.  “So why do it if it’s not worth points?”  They got this right away.  “It helps you win the game.”  “It makes you stronger.”

I enjoyed this conversation:

Student A: “So homework is just for learning.”

Me: “What are you talking about?  I thought homework was for sucking up to the teacher.”

Student B: “I thought so too.  That’s why I never did it.”

Student C: “I thought homework was for keeping kids in their homes at night.”


Once the questions had died down, I gave them a copy of a skills sheet that looks just like the ones I use to assess them, except that all the skills relate to my teaching.  I asked them to sign and date next to any items they had evidence that I had done.  I did this so I could find out if they really understood how to use the thing.  But it had unexpectedly positive side-effects.  From a quick glance, they could tell that I was going to get a “failing” grade.  It never occurred to me that they would be upset by this.

They had barely started reading when I started hearing gasps.  “You’re failing!” someone called out.  “Is our assessment of you going to affect your assessment of us?” someone else half-joked.  “Of course I’m not passing yet,” I replied reasonably.  “It’s the second day of class.  There’s no possible way I could have done 60% of my job by now.  That’s how it works: you start at 1, then you move up to 2.”  I walked around and peeked over shoulders to make sure they got the mechanics of what to fill in where.  I stopped a couple of times to talk to people who seemed to have overly generous assessments.  “How have I demonstrated that?” I asked.

We reviewed it together.  We got to practice technical reading in tiny, learning-outcome-sized pieces.  The highly condensed text on a skill sheet changes meaning if you miss a preposition.   Another unexpected side-effect: my students had noticed me doing things that I hadn’t noticed myself.  They had evidence to support most of their claims, too. There were a few that I disagreed with because I had only demonstrated part of the skill, and I modelled the kind of feedback that my “teacher” could have given me to help me improve.

Overall, they seemed very concerned about my feelings about “failing;” we calculated my current topic score at 0.5/5 and filled in the bar graph on the front of the skill sheet with today’s date.  I got a chance to model a growth mindset.  I made sure to let them see how proud I am of having achieved a 0.5 in only two days’ work, and mentioned that this is an improvement over two days ago, when I had a zero.  The usual running commentary of tongue-in-cheek jibes had a disarmingly earnest, reassuring tone.  “I know that you can improve your score the next time you reassess,” one student said.  Another student chimed in with “feel free to drop in to my office anytime if you want to get some feedback.”


As I get ready to launch into my second September, I’ve gone over the feedback from last spring.  If you’ve read since the beginning, you know that last December, half of my class was failing and  the rest were bored. There was a lot of “why do we have to learn this?” and “is this on the test?”

By the end of this semester, no one failed, and there were some remarkable changes in our classroom culture.  One of my colleagues said “when I check labs now, they show me which findings they think are important, instead of waiting for me to tell them what important things they should have found.”

I did some informal evaluations (I stole these questions from Robert Talbert at Casting Out Nines, and they worked well.)  I started getting feedback that sounded like this.

What do you like/dislike about the grading system?

Like: Keep trying skills until you understand it

I’ve actually grown pretty fond of the skill system.  I like that you actually make us demonstrate our knowledge of the individual skills, it actually helps me remember better sometimes, specially when going over quizzes.  The only thing I don’t like is that to get a skill checked off, mainly in the shop, it can take a long time.

The grading system works very well although I think using the skills for every aspect of the course is a little too flexible.  Using skills for the lab and going back to regular marked assignments.  I need more room, I will talk to you later.

Skills for quiz bad idea.  I had no ambition to study for test/quizzes.  I like the shop skills tho.

I dislike the unstructured feel of it, simply because I do better with the assignments/tests, but I do like the ability to retest on a skill if you don’t get it the first time.

Independent learning project was fantastic and incredibly valuable in the long run.

I really appreciate you trying something new, and already there is a huge improvement.  I hope you continue to innovate and improve the system.

I think the skills are very straight forward, they let us know exactly what you’re looking for.

It all encourages independence, which is great, unless you’re unmotivated.

There needs to be more communication.

Without marks to fuel my ego, I lost my drive to excel.

I think it helps focus more on the important stuff, and less on just completing useless lab stuff.

I was able to learn more with a smaller [work] load.  This gave me time to play and experiment, by approaching labs in a way that was helpful to me.

Yes.  It’s taken the good parts out of the lab book and made them easy to learn.

It certainly kept me on my toes to make sure that I understood what was needed to do the labs and the tests.

Yes.  Previously, I would be missing a small piece of the “puzzle,” this way I know what I need to do.

What do you LOVE about this course?

A lot more feedback this semester, understand concepts easier

The learning environment, the flexibility…

I love that I am actually doing well in this course…

The ability to work at your own pace (even though you have to remember not to procrastinate)

Hands-on

Designing my own labs

I feel that education has in general become stagnant, and I was delighted to have a teacher who was willing to try something new.  I know this takes courage and a lot of hard work.  Having 25% of my mark based on a project I was able to pick and have it graded in a way that suited me was a blast.

Electronics 🙂

All the freedom

The instruction and the easy feeling that one understands what is being taught.

I liked the independent learning project, even if I had been a bit too ambitious in my designs and dreams

What do you HATE about this course?

I wouldn’t say I hate anything really except there’s a lot of work sometimes.

Nothing.

Quizzes! don’t do well on them, if get one part wrong, all wrong

Other students asking questions on things we have already covered in class, then interrupting the instructor when trying to respond

If you could change ONE THING about this course, what would it be?

More level 5 questions on tests.  It is necessary to go above and beyond to get 100% on most modules.

Give assignment due dates.

More availability with students during lab time.

Harder deadlines, required milestones for the self-directed project

Level 5 questions: being bonus because sometimes difficult or busy time schedules to get one ready and do research

Include marked assignments somehow

To have a mix of skills and assignments

Points for homework so I’m more motivated to do it

More hands-on and practicing circuits

Any other comments about the course or the teacher?

Keep on getting better, you are doing a service to your students by furthering education.

I really enjoyed the year.  I just wish we had the skill program for the first year as well.

I like this semester better than last semester.  Keep up the good work!

Summary

My students are awesome, and almost as invested in developing me as I am in developing them.

Students really get reassessment.  Not a bad place to start when introducing the “sales pitch.”

They want more feedback, and they’re asking for it explicitly.  This is fantastic.  I require work samples as part of an application for reassessment now, so that should help.  I’ll also be experimenting with BlueHarvest.

Reassessment changed the concept of “studying.”  I think this is a good thing.  I suspect that what they mean by “study” is “do a long series of identical problems until you’ve got the procedure memorized,” and I’m ok with letting go of that.  At the same time, I need to spend more time helping them learn to test themselves, so that they’re not relying exclusively on my tests as a way to diagnose and learn.

It made them look hard at who they are, what they want, and why they do what they do.  I need to be ready for that.  Students probably could use some preparation for it too.

It exposed the squirming, seething reality of the differences between my expectations about teaching and their expectations about learning.  Dan Goldner’s got a great idea about how to clarify what the teacher’s job is, and I’m going to try it.

But hands-down the most fascinating thing that happened this past semester was that my students begged for homework.  Many interesting conversations ensued (post about this forthcoming).  Removing points for homework may have been the single most useful thing I did all year.  To be continued.

My partner’s 15-year-old daughter can place every element on a blank periodic table in under 6 minutes.   Her favourite YouTube video is a song about scientific experiments.  She tells jokes about Heisenberg and Schrödinger.  And she doesn’t like physics.

It’s not that she doesn’t like the class or her classmates; she’s in Grade 10, where physics is just a unit in a semester-long course, and she doesn’t object to the other units.  It’s not that she doesn’t like the teacher (same rationale).  It’s not that she doesn’t like thinking hard, or tricky puzzles, or things that other kids find uncool.  As evidence, I submit that last fall she read Twelfth Night for fun, just because it was sitting on a coffee table; last weekend, she taught herself to play chess (which she knew absolutely nothing about) by losing to the computer and analyzing its moves; and she has been known to go to class wearing a tie, a fedora, and/or pi-day pins.

I asked her what they were working on in this “physics.”  Answer: displacement and velocity.  (Before you conclude that this itself is the problem, note that she was already dreading it before it started.)  She tells me she thinks the work is pointless, all they do is answer questions where the answer for distance is “10m” and the answer for displacement is “10m north.”  Over, and over, and over.

So I feed her some examples that illustrate the difference between distance and displacement, without exactly explaining (you walk around the block.  How far did you walk?  How far did you get?).  Over the course of the next two days, during quiet moments in other conversations, she pipes up with questions, all of which I avoid answering directly but encourage her to give me examples that explain her thinking.  “Can distance and displacement be different numbers?”  “Does that mean that distance and displacement will be different if you make any turns?”  “Can displacement ever be higher than distance?”  “Does that mean that velocity can never be higher than speed?”
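Her questions all come down to one piece of arithmetic, which a short worked example (numbers mine, not from her class) makes concrete:

```latex
\text{Walk } 100~\mathrm{m} \text{ east, then } 100~\mathrm{m} \text{ north:} \\
\text{distance} = 100~\mathrm{m} + 100~\mathrm{m} = 200~\mathrm{m} \\
\left|\text{displacement}\right| = \sqrt{100^2 + 100^2}~\mathrm{m} \approx 141~\mathrm{m} \text{ (toward the northeast)}
```

Because the straight line from start to finish is never longer than the path actually walked, displacement can never exceed distance; divide both by the same elapsed time, and the magnitude of average velocity can never exceed average speed.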

She thinks about these things.  For fun.  Over Sunday brunch.  But she “doesn’t like” physics.

On her interim report card, she’s got 90s in everything (including math) except science, where she got a 77.  She’s excited to tell me about her grades, except that when she tells me about science class, she mumbles, looks away, and seems embarrassed.  She volunteers, “in physics, there’s a lot of formulas and math and graphs and stuff.  I’m hoping to bring my grades up next unit when we do chemistry.”

You know, where there aren’t so many graphs and formulas and math and stuff.

She’s a tough, persevering, open-minded, critical-thinking kid.  If she needs high school physics at some point, there are a bunch of ways to get it later, when it has a point for her.  I’m not actually worried.

I just wish I knew what to say.

A few people have asked about implementing the “2-copy quiz,”  so I thought I would write a bit about what I’m doing, what’s going well so far, and what I realize in hindsight I should have done differently.

Also, I want to say thanks and welcome to the new readers who’ve joined since that post was “Freshly-Pressed.”  I’m delighted that you’ve decided to stay. Don’t hesitate to comment on the older items if you are interested — none of these conversations are finished, by a long shot.

Backstory of the 2-Copy Quiz

I got intrigued by the idea of immediate feedback.  It’s easy with after-class make-up quizzes, and I was trying to figure out how to do it with in-class quizzes where a large group of people was likely to finish all at once.

1.  I could grade the quizzes and hand them back the next day

Too late — students have already forgotten why they wrote reactance when they should have thought about resistance.  Also, since the paper’s already graded, they know whether everything’s right or wrong.  It takes the question away.

2.  I could collect their work on one piece of paper, and they would still have the sheet of questions while we discuss the answers

Better, but still not what I want.  They will have forgotten the details of what they wrote and that’s where the devil is.  If I present the correct answers in a “clear, well illustrated way, students believe they are learning but they do not engage … on a deep enough level to realize that what is presented differs from their prior knowledge.”  This is a quote from a video about superficial learning made by Derek Muller, of Veritasium science vlog fame.  Derek goes on to say that those misconceptions can be cleared up by “presenting students’ misconceptions alongside the scientific concepts.”  It was the alongside part I wanted. It’s not until their thoughts and their actions are suddenly brought into focus at the same time that they realize there is a contradiction.

3.  I could collect their papers, run to the staff room, photocopy them, and come back to review the answers.

And while I was gone, they squeezed all the burning curiosity out of their questions among themselves.  Which is what they normally do in the hallway.

So the conclusion followed: we needed two copies of the quiz.  One for me to grade later, one for them to keep while we reviewed the answers right away. One thing I like about this method is that it doesn’t interrupt the learning.  It actually removes an interruption that would normally happen (students having to walk out into the hall to talk about the test). By inviting the conversation into the classroom, I can be a part of it if that’s helpful, or I can organize the students into groups and get out of the way.

Goal: for students to assess the goodness of their answer

We often met this goal.  Using class time to discuss “rightness” directs their point-chasing energy toward the good judgement I want them to develop (would this be considered educational judo?).  If your students are like mine, they will stop at nothing to find out if they “got the right answer.”  Sometimes this makes me tired, what with the assumption that there’s a single right answer, and the other assumption that rightness is all that counts.  But then I realized that motivation is motivation, and I could probably teach them to jump through flaming hoops or walk on a bed of nails if I put those things between a student who’s just written a test and the “right answers.”

So I put some self-assessment in the way instead.  Their desire to “get the right answer” extends to their self-assessment, of course, but the conversations became more nuanced throughout the term.  At first there was a lot of “will you accept this answer” and “will you accept that answer.”  I tried to help them make inferences about whether an answer is good enough.  I also opened myself up to changing my definition of the right answer if they could substantiate their arguments for an alternate perspective.  Hell, alternate perspectives and substantiating their thinking are more important than whatever was on the quiz.  Later on in the term, I started hearing things like, “No, I don’t think this answer is good enough, it’s a true statement but it doesn’t answer the question,” or “I think this is too vague to be considered proof of this skill.”  They’d rather say it before I say it.  Which means I have to be really careful what language I use during this conversation.  They will repeat it.

Logistics

I expect the students to write feedback to themselves on their quiz paper.  It can be praise or constructive criticism, but there has to be something for each question.  They see the value of this later when they’re studying to reassess, but it’s a hard sell at first, and I realized after a few weeks that my students actually had no idea how to do it.  For a while, I collected their worksheets at the end of class to read and write back to them.  But I don’t pass back the answer sheets that I correct.  If they know that I’m going to give the answer and some feedback, it takes the responsibility off of them to do it for themselves.

What worked well

  1. It’s easy and cheap. Just print off 2 quiz papers for every student, and have them fill out both.
  2. It’s flexible. You could have them make two full copies of their work.  You could ask them to make a full copy for themselves and an answer copy for the teacher (my tactic at the moment).  You could ask them to make an answer copy for the teacher, and some rough notes for themselves so they can remind themselves of their thinking (what my students actually do).
  3. In keeping with the idea of going with the flow of the learning, I let the class direct the questioning.  There’s no reason we have to review the first question first.  Often there’s one question that everyone is dying to know the answer to, so we talk about that one.
  4. I get an instant archive of student work.  Good for preparing my lesson plans next year, reconstituting my gradebook when a computer crashes, turning over the course to another instructor, submitting documentation to accrediting agencies, etc. etc.

What didn’t always work well

  1. It’s time-consuming to have to copy things to another page.  For numerical answers, it’s pretty easy to copy the final answer, but then you can’t see their work.  For short-answer/essay questions, it’s going to get seriously annoying for students to copy them in full to another page (I make them do it anyway).  Multiple-choice is pretty painless, but it’s a pain to feel limited to one kind of question.
  2. Students don’t always see the value of having their own copy, so they fill out my copy and leave theirs blank.  See Backstory #2 above.
  3. Students don’t always see the value of showing their work, so they fill out two copies with nothing but answers.  See Backstory #2 above.
  4. Students don’t always see the value of assessing their work at all.  The teacher is going to decide the final grade, and the teacher might disagree with their self-assessment, so why not just wait and let “the experts” make the judgement call?
  5. Students don’t always see the value of writing feedback to themselves.
  6. Students sometimes have no idea how to write feedback to themselves.

I struggled with the attitude of “wait for the teacher to decide if it’s good enough.”  I should have made it clearer that improving their ability to evaluate their answers was the point, not a side-effect.  I deliberately held off updating my online gradebook, so that they had to depend on themselves to track their skills (just got my student evals back today and my “poor tracking” of their grades is the #1 complaint).  It’s said best by Shawn Cornally from Think Thank Thunk: “I am not your grade’s babysitter.”  In fact I sometimes wondered if I should stop using the online gradebook altogether.  Yes, sometimes I disagree with their self-assessment; that’s why it’s important for them to take part in the group discussion after the quiz.  That’s where I discuss what I’m looking for in an answer and help them figure out if they’ve provided it.  This is hard on them, and makes them feel insecure, for lots of reasons, and I need to keep thinking about it.

One reason is that writing feedback is something I realized (a bit late) I had to teach.  I did this in a hurry and without the scaffolding it deserved.  Kelly O’Shea of Physics! Blog! broke it down for me:

How often do you think they’ve practiced the skill of consciously figuring out what caused them to make a mistake? How often do we just say, “That’s okay, you’ll get it next time.” instead of helping them pick out what went wrong? My guess is that they might not even know how to do it.

What’s Next

  • I’m still not sure how to teach them to create feedback for themselves, but it goes to the top of the pile of things to introduce in September next year, not February.
  • I’m toying with the idea that the students should keep an online gradebook updated.  Then I could check up on their scoring (and leave them some feedback about it), instead of them checking up on my scoring, and being annoyed that it’s not posted yet.  Not sure logistically how to do this. (Edit: ActiveGrade is already working on this)
  • A portable scanner.  For $300 I could solve Didn’t-Work #1, 2, and 3.  Just scan their quiz papers as they finish.  Makes it extra-easy for me to annotate the electronic copy and maybe make a screencast for a particular student, if warranted.  Saves trees, too.

Update, July 29, 2011: If you already own a smartphone, the portable scanner is free, and it’s called CamScanner.

In the “why didn’t I think of this before” category: my students now grade their own quizzes. No, they don’t get to give themselves whatever mark they want.  Here’s how it goes.

I used to learn a ton from going over their tests with a fine-toothed comb, trying to figure out what they were having trouble with.  It finally dawned on me that I didn’t need that learning opportunity nearly as much as my students did. So my students now do quizzes like this.

  1. Write quiz.
  2. Copy all answers to the provided answer sheet.
  3. Hand in answer sheet to teacher.  Keep question sheet with all your work on it!
  4. Twiddle thumbs for a few minutes waiting for everyone to finish.  Wish you could ask someone “what did you get for #5?”
  5. All papers are in.  Burst at the seams and ask “what’s the answer to 5!?”
  6. Participate in class or small-group discussion of questions and answers.  Compare your problem-solving approach to others’.  Figure out what you did right.  Figure out what you did wrong.  Make notes about how to do #5 differently, since now that you’ve found your misunderstandings or found new ways to tackle the question, you’re already planning to reassess next Wednesday.
  7. Put checkmarks on skill sheet.

I love this because it allows the students to go over the problems immediately after the test, during those 7 minutes of burning curiosity, yet still have their own test paper in front of them.  They remember what they did and why they did it (by tomorrow it’ll be gone into the ether of “oh, just a careless mistake” or “I understand it now”).  The downside is that they have to copy their answers from the question sheet to the answer sheet, which can take time.  I collect the answer sheets before the discussion/review, grade them on a complete/incomplete basis, and update my gradebook.

Because each person needs to know whether their answer meets the standard, they share.  This gets into a great discussion of the many possible right answers.  If I hand the tests back already graded, there’s no incentive to share.  Downside: you must risk speaking up and exposing a possibly flawed answer.  Upside: everyone else is doing it.  Sometimes the top students get things wrong, and take a good-natured drubbing, and it becomes more clear that smartness isn’t a magical quality that enables you to skip the “learning” phase of the learning.

Every once in a while, I collect the work sheets after we’re done reviewing.  It lets me see how they’re doing with grading their tests and writing feedback to themselves.  It also lets me have a look at the common misconceptions and confusions (although mostly I collect in-class work for that kind of intel).

Unexpected discovery: their negative self-talk shows up in their corrections.  I was blown away by the brutality of the things they were writing to themselves.  (“Stop being a moron!”  “Stupid, stupid, stupid!”).  Collecting the work sheets gives me a chance to write back to them, try to interrupt negative self-talk, and do some coaching about self-assessment.

When they request reassessments, the web form I use has a spot for “What have you learned about this skill that you didn’t know before?”  The answers there are almost as enlightening, and have evolved from “I learned that I am stupid” to “I learned that capacitors in parallel do not get added up if they have been converted to reactance.”  All of these things become the beginnings of conversations.
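(For readers outside electronics, that student’s insight is a standard circuit fact, and it’s a nice one: capacitances in parallel simply add, but once each capacitor is expressed as a reactance, the reactances combine reciprocally, the way parallel resistors do.)

```latex
C_{\text{total}} = C_1 + C_2
\qquad \text{but} \qquad
X_C = \frac{1}{2\pi f C}
\;\;\Rightarrow\;\;
\frac{1}{X_{\text{total}}} = \frac{1}{X_{C_1}} + \frac{1}{X_{C_2}}
```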

The first-year students made this photo-collage of themselves at the end of last semester.  What’s cool about this isn’t the stuff they’re doing in the photos (well, that was great, but.)  What moved me about this was that they took the photos themselves.  That may not seem earth-shattering to anyone who’s seen 20-somethings share inappropriate photos with hundreds of acquaintances.  But it’s important here because, at the start of the semester, many of these students were so cool that they could frost up a whiteboard from the back row of desks.

Always Formative wrote that “the thing that’s always appealed to me about [using a portfolio to collect examples of your work] is having students self-select what he or she perceives as quality. Developing the [skill] of self-evaluation is probably the most important thing I want a student to get.”  But that requires students to dare to be proud.  At the beginning of the year, they wouldn’t have been caught dead admitting that school was interesting or fun, much less documenting in video that they felt strongly about their craftsmanship.  Can you imagine a student posting photos of their homework to Facebook?

D-shell connector soldered by a student

Fifteen weeks later, they take turns holding the camera while someone mugs with some small thing that they made with their own hands.  If you’re concerned that services like Animoto make the wrong things easy, I mostly agree with you.  Except that in this case, digital storytelling was not the difficult skill that the students were trying to master.  Cracking out of their protective shell of indifference was orders of magnitude more painful and challenging.  They watched themselves in this video over and over.  I watched them as they watched it.  They were seeing themselves as skilled, in a way that wasn’t real until they saw it from the outside.  All of us were proud.

So far when I’ve introduced self-directed or self-initiated activities to my class, students have reacted with some combination of

  • Shock
  • Denial
  • Strong emotion
  • Resistance
  • Surrender
  • Struggle

If it goes well, and we’re able to make it through the storm, we might get to

  • Confidence
  • Integration

I’ve been thinking a lot about Navigating the Bumpy Road to Student-Centered Instruction by Richard Felder and Rebecca Brent.

The list of stages above is the one they use — pulled straight out of psychological research about dealing with trauma and grief.

The rest of the article has precautions and strategies in question and answer format for “smoothing out the bumps.”

If you’ve seen this in your classroom, how have you handled it?  If you are thinking of changing your curriculum toward self-direction, what do you think of the authors’ suggestions?  What exactly is it that students are grieving for?

If you want to know how it’s working for my students so far, check out their most recent test scores in my first post.  I’d score myself at a 3/5 on my own proposed rubric, so far.  But don’t worry — I have plans to reassess myself in the spring.

Ok, gotta take a break from the course design marathon to enjoy this:

“25 students, aged eight to ten years old, have become the youngest scientists ever to be published in the prestigious Royal Society journal Biology Letters.

Their findings report that buff-tailed bumblebees can learn to recognize nourishing flowers based on colors, patterns and spatial relationships… ‘This experiment is important, because, as far as we know, no one in history (including adults) has done this experiment before.’ Also, ‘It tells us that bees can learn to solve puzzles (and if we are lucky we will be able to get them to do Sudoku in a couple of years’ time).’ ”

Dear kids of Blackawton Elementary School, I bow before your superior awesomeness.
