On Exploring RC Circuits and trying to figure out why the capacitor charges faster than it discharges
Student 1: “Is the charge time always the same as the discharge time?”
Me: “According to this model, it is, if the resistance and capacitance haven’t changed.”
Student 2: “I’ve got data where the charge time was short and the discharge time was long.”
Me: “Why would a reasonable teacher say something that contradicts your data?”
Student 3, excitedly: “What circuit was it? Was there anything else in the circuit?”
Student 1: “I can’t remember what it was called — it had a resistor, a capacitor, and a diode.”
Student 2: “That’s it then! The diode — it’s changing its resistance!”
Student 1: “Yes — it goes from acting like a short to acting like an open. Thanks for bringing that up [Classmate's Name] — I just answered a HUGE question from that lab!”
Student services counsellor who sat in for a day
“You’re challenging my whole idea about science.”
While exploring why capacitors act like more and more resistance as they charge
“Maybe the negative side of the cap is filling up with electrons, which means less capacitance. According to the ‘tau model’, charge time = 5 * R * C. So if the charge time never changes, and the capacitance is going down, then the resistance must be going up.”
[I'm excited about this because, although it shows a misunderstanding of the definition of capacitance, the student is tying together a lot of new ideas. They are also using proportional reasoning and making sense of the story behind a formula. I need a better way to help students feel proud of things like this...]
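For readers who want to see the "tau model" in numbers, here's a quick sketch in Python (the component values are made up for illustration; the formula is the standard charging equation for a series RC circuit):

```python
import math

def cap_voltage(t, v_supply, r, c):
    """Charging capacitor voltage in a series RC circuit: v(t) = Vs * (1 - e^(-t/RC))."""
    tau = r * c
    return v_supply * (1 - math.exp(-t / tau))

# Hypothetical values: 10 V supply, 1 kOhm resistor, 100 uF capacitor -> tau = 0.1 s
tau = 1e3 * 100e-6
v_at_5tau = cap_voltage(5 * tau, 10, 1e3, 100e-6)
# After 5 time constants the capacitor is within about 1% of the supply voltage,
# which is where the rule of thumb "charge time = 5 * R * C" comes from.
```

So the student's proportional reasoning is sound: if the charge time (5RC) stays fixed and C goes down, R must go up to compensate.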
Student critique of a Wikipedia page
“There’s some great begging the question, right there!”
Student analyzing the mistake in their thinking about a resistor-diode circuit
“I didn’t think of current not flowing at all during the negative alternation of the source. This would mean that the direction of current through the resistor does not technically change. I thought that if current was flowing through the resistor, it would change direction even if there is a very small amount of current flowing. I did do a good job about thinking of the electrons already in the wires.”
One student’s feedback on another student’s paper
“I understand fully what you are trying to explain!”
On figuring out why a diode works
“If you make the connection to a wire, it’s like how copper atoms…”
“If it wasn’t doped, wouldn’t current flow in both directions?”
Students discussing a shake-to-charge flashlight they are designing
“In our rechargeable flashlight, if you put the switch in parallel with the diode, when it’s closed it will just short it out…”
Student who gave a recruiting presentation at a high school
“The day was a great step up for me that I never ever thought possible. To be able to go back to the high school where I am pretty sure most had given up hope on me and see and hear them tell me how proud they are of me for where I am today is a feeling I will never forget.”
On network analysis
“At first I didn’t understand why we had to learn these complicated methods when we could just do it the simple way you showed us last semester. But when you get to these complicated circuits, it makes it so much easier. I do math every night now, even if I don’t have any for homework, because you have to exercise all the time or you lose it.”
On graphical waveform addition
“I got off to a bad start with this, I had the wrong answers for everything, and I really didn’t know how to do it. I won’t lie. But now after taking all these measurements, I’m starting to understand. And I did really bad on that first quiz — I didn’t even know what DC offset was. But I made up some practice problems that are a little bit different from the quiz, and I can do them now.”
On AC voltage, sinusoidal signals, and what the time domain really means
“I just realized that the word ‘electronics’ has the word ‘electron’ in it.” (x2) (After a conversation about how a sinusoidal signal represents a voltage or current that changes over time)
“Is this why we need DC voltage for electronics — so it doesn’t turn off all the time?”
“In an AC circuit, how do the electrons get their energy back after they’ve lost it?” (I love the insight in this question — the synthesis of ideas, the demand for a coherent cause)
While presenting some routine lab measurements
“How does an electron know how much voltage to drop in each component?” (7 months later, students are suddenly gobsmacked by the totally weird implications of Kirchhoff’s Voltage Law)
During a one-on-one discussion of the group’s interpersonal dynamics
“I find no one in this program is looking for someone to give them the answers. We might text all night long about homework but it’s never ‘Can you send me X,’ it’s always ‘How can I figure out X?’”
While whiteboarding some AC circuit data
“I don’t like saying that KVL applies to instantaneous voltages, because it applies everywhere.”
“But if you say instantaneous, it applies in a general sense. Have you ever seen an AC circuit where the component voltages didn’t add up to the supply?”
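The students' point can be checked numerically: in any series AC circuit, the instantaneous component voltages sum to the instantaneous supply voltage at every moment. Here's a minimal sketch using phasors for a series RC circuit (all component values are hypothetical, chosen just to illustrate the idea):

```python
import cmath
import math

# Hypothetical series RC circuit driven by a 1 kHz sinusoid.
R = 1e3            # ohms
C = 100e-9         # farads
f = 1e3            # hertz
w = 2 * math.pi * f
Vs = 10            # peak volts, taken as the 0-degree phase reference

Zc = 1 / (1j * w * C)       # capacitive impedance
I = Vs / (R + Zc)           # phasor current
VR, VC = I * R, I * Zc      # phasor voltages across each component

def instantaneous(phasor, t):
    """Time-domain value at time t of a peak-valued phasor at angular frequency w."""
    return (phasor * cmath.exp(1j * w * t)).real

# KVL holds at every instant we sample: vR(t) + vC(t) == vs(t)
for t in (0.0, 0.1e-3, 0.37e-3):
    total = instantaneous(VR, t) + instantaneous(VC, t)
    assert abs(total - instantaneous(Vs + 0j, t)) < 1e-9
```

Even though the RMS magnitudes |VR| + |VC| do *not* add up to |Vs| (they add as phasors, not scalars), the instantaneous voltages do, at every point in time.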
Another whiteboarding session
“Make sure you’re talking about electrons, otherwise it’s not a cause!”
“And that’s supported by the model, because…”
While designing an experiment
“Do you have a 1uF capacitor?” “No, I guess we can use 100uF and scale it…” (Students making big gains in proportional reasoning)
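The scaling the students are doing rests on the time constant being directly proportional to capacitance. A tiny sketch of the reasoning (resistor value hypothetical):

```python
def time_constant(r, c):
    """RC time constant: tau = R * C, in seconds."""
    return r * c

# tau is proportional to C, so a 100 uF cap gives a time constant
# 100x longer than a 1 uF cap with the same (hypothetical) resistor.
r = 10e3
ratio = time_constant(r, 100e-6) / time_constant(r, 1e-6)
# ratio is (essentially) 100: measure with the big cap, then divide the times by 100.
```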
After discussing how a capacitor’s voltage approaches an asymptote
“I never noticed before how much math relates to life — like the idea that sometimes the closer you get to something, the harder it is to get there. I guess it’s not surprising — because math comes from life. Math is everything.”
I wrote recently about creating a rubric to help students analyze their mistakes. Here are some examples of what students wrote — a big improvement over “I get it now” and “It was just a stupid mistake.”
The challenge now will be helping them get in the habit of doing this consistently. I’m thinking of requiring this on reassessment applications. The downside would be a lot more applications being returned for a second draft, since most students don’t seem able to do this kind of analysis in a single draft.
Understand What’s Strong
“I thought it was a parallel circuit, and my answer would have been right if that was true.”
“I got this question wrong but I used the idea from the model that more resistance causes less current and less current causes less power to be dissipated by the light bulbs.”
“The process of elimination was a good choice to eliminate circuits that didn’t work.”
“A good thing about my answer is that I was thinking if the circuit was in series, the current would be the same throughout the circuit.”
Diagnose What’s Wrong
“The line between two components makes this circuit look like a parallel circuit.”
“What I don’t know is, why don’t electrons take the shorter way to the most positive side of the circuit?”
“I made the mistake that removing parallel branches would increase the remaining branches’ voltage.”
“What I didn’t realize was that in circuit 2, C is the only element in the circuit so the voltage across the light bulb will be the battery voltage, just like light bulb A.”
“I looked at the current in the circuit as if the resistor would decrease the current from that point on.”
“I think I was thinking of the A bulb as being able to move along the wire and then it would be in parallel too.”
“What I missed was that this circuit is a series-parallel with the B bulb in parallel with a wire, effectively shorting it out.”
“What I did not realize at first about Circuit C was that it was a complete circuit because the base of the light bulb is in fact metal.”
“I thought there would need to be a wire from the centre of the bulb to be a complete circuit.”
“I wasn’t recognizing that in Branch 2, each electron only goes through one resistor or the other. In Branch 1, electrons must flow through each resistor.”
“I was comparing the resistance of the wire and not realizing the amount of distance electrons flowed doesn’t matter because wire has such low resistance either way.”
“My problem was I wasn’t seeing myself as the electrons passing through the circuit from negative to positive.”
“In this circuit, lightbulb B is shorted so now all the voltage is across light bulb A.”
“When there is an increase in resistance, and as long as the voltage stays constant, the current flowing through the entire circuit decreases.”
“After looking into the answer, I can see that the electrons can make their way from the bottom of the battery to the middle of the bulb, then through the filament, and back to the battery, because of metal conducting electrons.”
“To improve my answer, I could explain why they are in parallel, and also why the other circuits are not parallel.”
“I can generalize this by saying in series circuits, the current will stay the same, but in parallel circuits, the current may differ.”
“From our model, less resistance causes more current to flow. This is a general idea that will work for all circuits.”
I expect students to correct their quizzes and “write feedback to themselves” when they apply for reassessment. The content that I get varies widely, and most of it is not very helpful, along the lines of
- I used the wrong formula
- I forgot that V = IR
- It was a stupid mistake, I get it now.
I was inspired by Joss Ives’ post on quiz reflection assignments to get specific about what I was looking for. This all stems from a conversation I had with Kelly O’Shea about two years ago, back when I had launched myself into standards-based/project/flipped/inquiry/Socratic/mindset/critical thinking/whatnot all at once and unprepared, that has been poking its sharp edges into my brain ever since:
Me: Sometimes I press them to be specific about what they learned or which careless mistake they need to guard against in the future. It’s clear that many find this humiliating, some kind of ingenious psychological punishment for having made a mistake. Admitting that they learned something means admitting they didn’t know it all along, and that embarrasses them. Does that mean they’re ashamed of learning?
Kelly: How often do you think they’ve practiced the skill of consciously figuring out what caused them to make a mistake? How often do we just say, “That’s okay, you’ll get it next time.” instead of helping them pick out what went wrong? My guess is that they might not even know how to do it.
Me: *stunned silence*
So this year I developed this.
Phases of Feedback
- Understand what you did well
- Diagnose why you had trouble
- Generalize what you learned
Steps 1 and 3 can be used even for answers that were accepted as “correct.”
This has yielded lots of interesting insight, as well as some interesting pushback. Plus, it gave me an opportunity to help my students understand what exactly “generalize” means. In a future post I’ll try to gather up some examples. Overall, it’s helped me communicate what I expect, and has helped students develop more insight into their thinking as well as the physics involved.
My standard (informal) course feedback form asks,
- What do you like or dislike about the grading system?
- How does the grading system affect your learning?
- What do you love about this course?
- What do you hate about this course?
- What would you change about this course?
The 2nd-year courses are less science and more engineering, so my approach is less inquiry and more project-based. In particular, in the course they’re evaluating, there’s an independent project where students must define their project, set their own deadlines, set their own evaluation scheme, then grade themselves. It’s worth a quarter of their grade. I reserve the right to veto a mark, but I’ve never done it. Here’s a sample of the feedback I got from 2nd year students last week.
1. Grading system
- Love reassessment (2)
- Feel dependent on ActiveGrade
- Need quicker way of knowing when a test is corrected
- Love the independent project
- Make reassessment deadline start when grade is updated?
- Ability to do skills on your own time. But they can also pile up.
- Clearly shows what you need to know
- Retests help a lot with understanding because you know what you need to improve on
- Showing improvement helps solidify thoughts
2. Effects of Grading System
- Reassessing forces you to gain understanding instead of “I failed that let’s move on”
- I can thoroughly explain certain circuits from my head, I could not do that before.
- Helpful — I can choose to not finish a lab if I do not understand it fully, then ask questions and come back to it
- I knew nothing about electronics before this course but skill based learning has really helped me understand many topics
- Lab work — hands on feel
- Making things work and understanding what they do
- Retests, doing something more than once makes remembering it easier.
- Lack of info on notch filter (2)
- Lack of time
- Hands on – when you don’t quite understand something, lab work refines understanding
- It’s a pretty refined, good system. Once you know something, it sticks with you.
- More time to learn. 3 years?
- Reassessment deadlines
Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty. It was part of a week-long course on assessment and evaluation. I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.
Some Examples of Feedback
“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”
“You practiced what you were preaching”
“The forms can be changed and used in my classes”
“Great facilitator — no jargon, plain talk, right to the point! Excellent. Very useful.”
“You were great! I like you! Good job! (sorry about that) “
“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”
“Needed more time”
“Good quality writing exercise was a bit disconnected”
“Finally a tradeswoman who can relate to the trades”
In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them. The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good. Of course, I loved the humour in the “You were great” comment shown above – if someone can parody something, it’s pretty convincing evidence of understanding. I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.
Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room. They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it. It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).
- Peer assessment is a process of having students improve their work based on feedback from other students
- To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
- Peer assessment can help students improve their judgement about their own work
- Peer assessment can help students depend less on the teacher to solve simple problems
- Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
- Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it
I distributed this handout for people to follow. I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.
I was nervous enough that I wrote speaking notes that are almost script-like. I rehearsed enough that I didn’t need them most of the time.
Avoiding Pitfall #1: People feeling either patronized or left behind
I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs. I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts. There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review. At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.
After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment. Questions fell into these categories:
- How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
- What if students are too intimidated/unconfident to share their work with their peers? (At least one participant worried that this could be emotionally dangerous)
- Why would students buy in — what’s in it for the assessor?
- When/for what tasks can it be used?
- Logistics: does everyone participate? Is it required? Should students’ names be on it? Should the assessment be written?
- How quick can it be? We don’t have a lot of time for touchy-feely stuff.
- Can this work with individualized learning plans, where no two students are at the same place in the curriculum?
I really didn’t see these questions coming. I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being. That point was raised by participants ranging from School of Trades to the Health & Human Services faculty.
It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other. I pointed that out, and saw lots of head nodding.
Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee. It also wasn’t about group projects. We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty. It was not about people’s feelings. It was about their work.
I saw jaws drop. Some School of Trades faculty actually cheered. It really gave me pause. In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.” I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it. My emotions are sometimes relevant to this process, and sometimes not. I wonder how other people interpret the directive to “reflect.” I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little strategy about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.
How to get honest peer assessments?
I talked briefly about helping students generate useful feedback. One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient. The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality). Later, I might collect and read the feedback before giving it back to the recipient. I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).
Another way to lower the intimidation factor is to have everyone assess everyone. This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return. It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.
Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all. It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later. Students are quick to realize that it might as well be sooner. Also, I typically do this during class time, so I had a roughly 100% completion rate last year.
How to get useful peer assessments?
I went ahead with my plan to have workshop participants think about solder joints. A good solder joint is shiny, smooth, and clean. It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on. I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.
Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (i.e. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback. I asked participants to work through the feedback, determining if it met these criteria:
- Identifies strengths and weaknesses
- Gives clear suggestion about what to do next time
- Focusses on the student’s work, not their talent or intelligence
We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2). This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not. Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop. I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.
Avoiding Pitfall #2: This won’t work in my classroom
Surprisingly, this didn’t come up at all.
I came back often to the idea that there are things students can assess for each other and there are things they need us for. I made sure to reiterate often that each teacher would be the best judge of which tasks were which in their discipline. I also invited participants to consider whether a student could fully assess that task, or could they only assess a few of the simpler criteria? Which criteria? What must the students necessarily include in their feedback? What must they stay away from, and how is this related to the norms of their discipline? We didn’t have time to discuss this. If you were a participant in the workshop and you’re reading this, I’d love to hear what you came up with.
Pitfall #3: Disconnected/too long
Well, I wasn’t able to avoid this. After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback. They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality). We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.
Questions that didn’t get answered:
Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class. The group meets at a specified time but each student is working at their own pace. I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?
One teacher mentioned a student who was adamant that peer assessment violated their privacy, that only the teacher should see it. I never ran into this problem, so I’m not sure what would work best. A few ideas I might try: have students assess “fake” work at first, so they can get the hang of it and get comfortable with the idea, or remove names from work so that students don’t know who they’re assessing. In my field, it’s pretty typical for people to inspect each other’s work; in fields where that is true, I would sell it as workplace preparation.
We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment. My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion. Mostly, that’s because attitudes and opinions are not assessable for accuracy. (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion). I suppose I could have students assess each other’s opinions and attitudes for clarity — not whether your position is right or wrong, but whether I can understand what your position is. I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.
Obviously I encourage students to share their feelings in lots of big and small ways. In practice, they do — quite a lot. But I can’t see my way clear to requiring it. Partly it’s because that is not typically a part of the discipline we’re in. Partly it’s because I hate it, myself. At best, it becomes inauthentic. The very prospect of forcing people to share their feelings seems to make them want to do it less. It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others. I’m trying to help them get better at making those decisions themselves — not make those decisions for them. Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion. Judgments are fair game, and must be assessed for good-quality reasoning. Opinions and feelings are not. We can share them and agree or disagree with them, but I don’t consider that to be assessment.
Finally, a participant asked about how to build student buy-in. Students might ask, what’s in it for me? What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates. They read it voraciously, with much more interest than they read feedback from me. In the end, people love reading about themselves.
The teacher’s skill sheet was a success (thanks, Dan). Today was our third day with the first-year students, and my first time explaining skills-based grading to an incoming class. Our reassessment period is Thursdays from 2:30 – 4:30, so in this morning’s shop class I dropped a skill sheet on their benches and we started using it. By the time I started explaining how I grade this afternoon, they already had a skill signed off.
I handed out their skills folders and the first two skill sheets for DC circuits. You should have seen their jaws drop when I explained that they can choose if, when, and how often they reassess. They asked great questions and gave thoughtful answers. We talked about how everyone progresses, the many ways of getting extra help, learning at your own pace, and the infinite ways of demonstrating improvement or proficiency. They wanted to know what is proof of improvement (required when applying for reassessment), and had suggestions (quiz corrections, practice problems, written explanations). They wanted to know what level 5 questions are, where to find some, and how to prevent them from getting too big. Many of them had ideas in mind already and we bounced those around to see if they meet the criteria (at least two skills, and you have to choose the problem-solving approach yourself, so it can’t be the same as something we’ve done in class).
We talked about how and why you couldn’t get credit for level 4 until you’ve completed level 3. I explained it in terms of employers’ expectations about basic skills. One student explained it back to me in terms of “levelling up your character” in role-playing games. We talked about feedback, from me and from themselves. I gave examples of feedback that does and does not help you improve (“I need to figure out why V and I are different” compared to “I don’t get it.”). We talked about how many points homework is worth (none). My get-to-know-you survey tells me there are a lot of soccer players in the room, so we talked about practices and push ups. “Do you get points in the league standings for showing up to practice? What about for going to the gym?” I asked. Of course they said no. “So why do it if it’s not worth points?” They got this right away. “It helps you win the game.” “It makes you stronger.”
I enjoyed this conversation:
Student A: “So homework is just for learning.”
Me: “What are you talking about? I thought homework was for sucking up to the teacher.”
Student B: “I thought so too. That’s why I never did it.”
Student C: “I thought homework was for keeping kids in their homes at night.”
Once the questions had died down, I gave them a copy of a skills sheet that looks just like the ones I use to assess them, except that all the skills relate to my teaching. I asked them to sign and date next to any items they had evidence that I had done. I did this so I could find out if they really understood how to use the thing. But it had unexpectedly positive side-effects. From a quick glance, they could tell that I was going to get a “failing” grade. It never occurred to me that they would be upset by this.
They had barely started reading when I started hearing gasps. “You’re failing!” someone called out. “Is our assessment of you going to affect your assessment of us?” someone else half-joked. “Of course I’m not passing yet,” I replied reasonably. “It’s the second day of class. There’s no possible way I could have done 60% of my job by now. That’s how it works: you start at 1, then you move up to 2.” I walked around and peeked over shoulders to make sure they got the mechanics of what to fill in where. I stopped a couple of times to talk to people who seemed to have overly generous assessments. “How have I demonstrated that?” I asked.
We reviewed it together. We got to practice technical reading in tiny, learning-outcome-sized pieces. The highly condensed text on a skill sheet changes meaning if you miss a preposition. Another unexpected side-effect: my students had noticed me doing things that I hadn’t noticed myself. They had evidence to support most of their claims, too. There were a few that I disagreed with because I had only demonstrated part of the skill, and I modelled the kind of feedback that my “teacher” could have given me to help me improve.
Overall, they seemed very concerned about my feelings about “failing”; we calculated my current topic score at 0.5/5 and filled in the bar graph on the front of the skill sheet with today’s date. I got a chance to model a growth mindset. I made sure to let them see how proud I am of having achieved a 0.5 in only two days’ work, and mentioned that this is an improvement over two days ago, when I had a zero. The usual running commentary of tongue-in-cheek jibes had a disarmingly earnest, reassuring tone. “I know that you can improve your score the next time you reassess,” one student said. Another student chimed in with “feel free to drop in to my office anytime if you want to get some feedback.”
As I get ready to launch into my second September, I’ve gone over the feedback from last spring. If you’ve read since the beginning, you know that last December, half of my class was failing and the rest were bored. There was a lot of “why do we have to learn this?” and “is this on the test?”
By the end of this semester, no one failed, and there were some remarkable changes in our classroom culture. One of my colleagues said “when I check labs now, they show me which findings they think are important, instead of waiting for me to tell them what important things they should have found.”
I did some informal evaluations. (I stole these questions from Robert Talbert at Casting Out Nines, and they worked well.) I started getting feedback that sounded like this:
What do you like/dislike about the grading system?
Like: Keep trying skills until you understand it
I’ve actually grown pretty fond of the skill system. I like that you actually make us demonstrate our knowledge of the individual skills, it actually helps me remember better sometimes, specially when going over quizzes. The only thing I don’t like is that to get a skill checked off, mainly in the shop, it can take a long time.
The grading system works very well although I think using the skills for every aspect of the course is a little too flexible. Using skills for the lab and going back to regular marked assignments. I need more room, I will talk to you later.
Skills for quiz bad idea. I had no ambition to study for test/quizzes. I like the shop skills tho.
I dislike the unstructured feel of it, simply because I do better with the assignments/tests, but I do like the ability to retest on a skill if you don’t get it the first time.
Independent learning project was fantastic and incredibly valuable in the long run.
I really appreciate you trying something new, and already there is a huge improvement. I hope you continue to innovate and improve the system.
I think the skills are very straight forward, they let us know exactly what you’re looking for.
It all encourages independence, which is great, unless you’re unmotivated.
There needs to be more communication.
Without marks to fuel my ego, I lost my drive to excel.
I think it helps focus more on the important stuff, and less on just completing useless lab stuff.
I was able to learn more with a smaller [work] load. This gave me time to play and experiment, by approaching labs in a way that was helpful to me.
Yes. It’s taken the good parts out of the lab book and made them easy to learn.
It certainly kept me on my toes to make sure that I understood what was needed to do the labs and the tests.
Yes. Previously, I would be missing a small piece of the “puzzle,” this way I know what I need to do.
What do you LOVE about this course?
A lot more feedback this semester, understand concepts easier
The learning environment, the flexibility…
I love that I am actually doing well in this course…
The ability to work at your own pace (even though you have to remember not to procrastinate)
Designing my own labs
I feel that education has in general become stagnant, and I was delighted to have a teacher who was willing to try something new. I know this takes courage and a lot of hard work. Having 25% of my mark based on a project I was able to pick and have it graded in a way that suited me was a blast.
All the freedom
The instruction and the easy feeling that one understands what is being taught.
I liked the independent learning project, even if I had been a bit too ambitious in my designs and dreams
What do you HATE about this course?
I wouldn’t say I hate anything really except there’s a lot of work sometimes.
Quizzes! don’t do well on them, if get one part wrong, all wrong
Other students asking questions on things we have already covered in class, then interrupting the instructor when trying to respond
If you could change ONE THING about this course, what would it be?
More level 5 questions on tests. It is necessary to go above and beyond to get 100% on most modules.
Give assignment due dates.
More availability with students during lab time.
Harder deadlines, required milestones for the self-directed project
Level 5 questions: being bonus because sometimes difficult or busy time schedules to get one ready and do research
Include marked assignments somehow
To have a mix of skills and assignments
Points for homework so I’m more motivated to do it
More hands-on and practicing circuits
Any other comments about the course or the teacher?
Keep on getting better, you are doing a service to your students by furthering education.
I really enjoyed the year. I just wish we had the skill program for the first year as well.
I like this semester better than last semester. Keep up the good work!
My students are awesome, and almost as invested in developing me as I am in developing them.
Students really get reassessment. Not a bad place to start when introducing the “sales pitch.”
They want more feedback, and they’re asking for it explicitly. This is fantastic. I require work samples as part of an application for reassessment now, so that should help. I’ll also be experimenting with BlueHarvest.
Reassessment changed the concept of “studying.” I think this is a good thing. I suspect that what they mean by “study” is “do a long series of identical problems until you’ve got the procedure memorized,” and I’m ok with letting go of that. At the same time, I need to spend more time helping them learn to test themselves, so that they’re not relying exclusively on my tests as a way to diagnose and learn.
It made them look hard at who they are, what they want, and why they do what they do. I need to be ready for that. Students probably could use some preparation for it too.
It exposed the squirming, seething reality of the differences between my expectations about teaching and their expectations about learning. Dan Goldner’s got a great idea about how to clarify what the teacher’s job is, and I’m going to try it.
But hands-down the most fascinating thing that happened this past semester was that my students begged for homework. Many interesting conversations ensued (post about this forthcoming). Removing points for homework may have been the single most useful thing I did all year. To be continued.
My partner’s 15-year-old daughter can place every element on a blank periodic table in under 6 minutes. Her favourite YouTube video is a song about scientific experiments (see above). She tells jokes about Heisenberg and Schrödinger. And she doesn’t like physics.
It’s not that she doesn’t like the class or her classmates; she’s in Grade 10, where physics is just a unit in a semester-long course, and she doesn’t object to the other units. It’s not that she doesn’t like the teacher (same rationale). It’s not that she doesn’t like thinking hard, or tricky puzzles, or things that other kids find uncool. As evidence, I submit that last fall she read Twelfth Night for fun, just because it was sitting on a coffee table; last weekend, she taught herself to play chess (which she knew absolutely nothing about) by losing to the computer and analyzing its moves; and she has been known to go to class wearing a tie, a fedora, and/or pi-day pins.
I asked her what they were working on in this “physics.” Answer: displacement and velocity. (Before you conclude that this itself is the problem, note that she was already dreading it before it started.) She tells me she thinks the work is pointless: all they do is answer questions where the answer for distance is “10m” and the answer for displacement is “10m north.” Over, and over, and over.
So I feed her some examples that illustrate the difference between distance and displacement, without exactly explaining (you walk around the block. How far did you walk? How far did you get?). Over the course of the next two days, during quiet moments in other conversations, she pipes up with questions, all of which I avoid answering directly but encourage her to give me examples that explain her thinking. “Can distance and displacement be different numbers?” “Does that mean that distance and displacement will be different if you make any turns?” “Can displacement ever be higher than distance?” “Does that mean that velocity can never be higher than speed?”
She thinks about these things. For fun. Over Sunday brunch. But she “doesn’t like” physics.
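Her questions all come down to one piece of arithmetic she was never shown the point of: distance accumulates with every step, while displacement only compares where you ended up to where you started. A quick sketch (my own illustration, not anything from her class) makes the walk-around-the-block example concrete:

```python
import math

# A hypothetical walk "around the block": four 10 m legs of a square,
# ending back at the starting corner. Each leg is (dx, dy) in metres.
legs = [(0, 10), (10, 0), (0, -10), (-10, 0)]

# Distance: add up the length of every leg, no matter the direction.
distance = sum(math.hypot(dx, dy) for dx, dy in legs)

# Displacement: only the straight-line separation between start and end.
end_x = sum(dx for dx, _ in legs)
end_y = sum(dy for _, dy in legs)
displacement = math.hypot(end_x, end_y)

print(distance)      # how far you walked: 40.0
print(displacement)  # how far you got: 0.0
```

Since displacement is the length of one straight line between the endpoints of a path, it can never exceed the distance walked along that path, which is exactly why average velocity can never be higher than average speed.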
On her interim report card, she’s got 90s in everything (including math) except science, where she got a 77. She’s excited to tell me about her grades, except that when she tells me about science class, she mumbles, looks away, and seems embarrassed. She volunteers, “in physics, there’s a lot of formulas and math and graphs and stuff. I’m hoping to bring my grades up next unit when we do chemistry.”
You know, where there aren’t so many graphs and formulas and math and stuff.
She’s a tough, persevering, open-minded, critical-thinking kid. If she needs high school physics at some point, there are a bunch of ways to get it later, when it has a point for her. I’m not actually worried.
I just wish I knew what to say.
A few people have asked about implementing the “2-copy quiz,” so I thought I would write a bit about what I’m doing, what’s going well so far, and what I realize in hindsight I should have done differently.
Also, I want to say thanks and welcome to the new readers who’ve joined since that post was “Freshly-Pressed.” I’m delighted that you’ve decided to stay. Don’t hesitate to comment on the older items if you are interested — none of these conversations are finished, by a long shot.
Backstory of the 2-Copy Quiz
I got intrigued by the idea of immediate feedback. It’s easy with after-class make-up quizzes, and I was trying to figure out how to do it with in-class quizzes where a large group of people was likely to finish all at once.
1. I could grade the quizzes and hand them back the next day
Too late — students have already forgotten why they wrote reactance when they should have thought about resistance. Also, since the paper’s already graded, they know whether everything’s right or wrong. It takes the question away.
2. I could collect their work on one piece of paper, and they would still have the sheet of questions while we discuss the answers
Better, but still not what I want. They will have forgotten the details of what they wrote and that’s where the devil is. If I present the correct answers in a “clear, well illustrated way, students believe they are learning but they do not engage … on a deep enough level to realize that what is presented differs from their prior knowledge.” This is a quote from a video about superficial learning made by Derek Muller, of Veritasium science vlog fame. Derek goes on to say that those misconceptions can be cleared up by “presenting students’ misconceptions alongside the scientific concepts.” It was the alongside part I wanted. It’s not until their thoughts and their actions are suddenly brought into focus at the same time that they realize there is a contradiction.
3. I could collect their papers, run to the staff room, photocopy them, and come back to review the answers.
And while I was gone, they squeezed all the burning curiosity out of their questions among themselves. Which is what they normally do in the hallway.
So the conclusion followed: we needed two copies of the quiz. One for me to grade later, one for them to keep while we reviewed the answers right away. One thing I like about this method is that it doesn’t interrupt the learning. It actually removes an interruption that would normally happen (students having to walk out into the hall to talk about the test). By inviting the conversation into the classroom, I can be a part of it if that’s helpful, or I can organize the students into groups and get out of the way.
Goal: for students to assess the goodness of their answer
We often met this goal. Using class time to discuss “rightness” directs their point-chasing energy toward the good judgement I want them to develop (would this be considered educational judo?). If your students are like mine, they will stop at nothing to find out if they “got the right answer.” Sometimes this makes me tired, what with the assumption that there’s a single right answer, and the other assumption that rightness is all that counts. But then I realized that motivation is motivation, and I could probably teach them to jump through flaming hoops or walk on a bed of nails if I put those things between a student who’s just written a test and the “right answers.”
So I put some self-assessment in the way instead. Their desire to “get the right answer” extends to their self-assessment, of course, but the conversations became more nuanced throughout the term. At first there was a lot of “will you accept this answer” and “will you accept that answer.” I tried to help them make inferences about whether an answer is good enough. I also opened myself up to changing my definition of the right answer if they could substantiate their arguments for an alternate perspective. Hell, alternate perspectives and substantiating their thinking are more important than whatever was on the quiz. Later on in the term, I started hearing things like, “No, I don’t think this answer is good enough, it’s a true statement but it doesn’t answer the question,” or “I think this is too vague to be considered proof of this skill.” They’d rather say it before I say it. Which means I have to be really careful what language I use during this conversation. They will repeat it.
I expect the students to write feedback to themselves on their quiz paper. It can be praise or constructive criticism, but there has to be something for each question. They see the value of this later when they’re studying to reassess, but it’s a hard sell at first, and I realized after a few weeks that my students actually had no idea how to do it. For a while, I collected their worksheets at the end of class to read and write back to them. But I don’t pass back the answer sheets that I correct. If they know that I’m going to give the answer and some feedback, it takes the responsibility off of them to do it for themselves.
What worked well
- It’s easy and cheap. Just print off 2 quiz papers for every student, and have them fill out both.
- It’s flexible. You could have them make two full copies of their work. You could ask them to make a full copy for themselves and an answer copy for the teacher (my tactic at the moment). You could ask them to make an answer copy for the teacher, and some rough notes for themselves so they can remind themselves of their thinking (what my students actually do).
- In keeping with the idea of going with the flow of the learning, I let the class direct the questioning. There’s no reason we have to review the first question first. Often there’s one question that everyone is dying to know the answer to, so we talk about that one.
- I get an instant archive of student work. Good for preparing my lesson plans next year, reconstituting my gradebook when a computer crashes, turning over the course to another instructor, submitting documentation to accrediting agencies, etc. etc.
What didn’t always work well
- It’s time-consuming to have to copy things to another page. For numerical answers, it’s pretty easy to copy the final answer, but then you can’t see their work. For short-answer/essay questions, it’s going to get seriously annoying for students to copy them in full to another page (I make them do it anyway). Multiple-choice is pretty painless, but it’s a pain to feel limited to one kind of question.
- Students don’t always see the value of having their own copy, so they fill out my copy and leave theirs blank. See Backstory #2 above.
- Students don’t always see the value of showing their work, so they fill out two copies with nothing but answers. See Backstory #2 above.
- Students don’t always see the value of assessing their work at all. The teacher is going to decide the final grade, and the teacher might disagree with their self-assessment, so why not just wait and let “the experts” make the judgement call?
- Students don’t always see the value of writing feedback to themselves.
- Students sometimes have no idea how to write feedback to themselves.
I struggled with the attitude of “wait for the teacher to decide if it’s good enough.” I should have made it clearer that improving their ability to evaluate their answers was the point, not a side-effect. I deliberately held off updating my online gradebook, so that they had to depend on themselves to track their skills (just got my student evals back today and my “poor tracking” of their grades is the #1 complaint). It’s said best by Shawn Cornally from Think Thank Thunk: “I am not your grade’s babysitter.” In fact I sometimes wondered if I should stop using the online gradebook altogether. Yes, sometimes I disagree with their self-assessment; that’s why it’s important for them to take part in the group discussion after the quiz. That’s where I discuss what I’m looking for in an answer and help them figure out if they’ve provided it. This is hard on them, and makes them feel insecure, for lots of reasons, and I need to keep thinking about it.
One reason is that writing feedback is something I realized (a bit late) that I had to teach. I did this in a hurry and without the scaffolding it deserved. Kelly O’Shea of Physics! Blog! broke it down for me:
How often do you think they’ve practiced the skill of consciously figuring out what caused them to make a mistake? How often do we just say, “That’s okay, you’ll get it next time.” instead of helping them pick out what went wrong? My guess is that they might not even know how to do it.
- I’m still not sure how to teach them to create feedback for themselves, but it goes to the top of the pile of things to introduce in September next year, not February.
- I’m toying with the idea that the students should keep an online gradebook updated. Then I could check up on their scoring (and leave them some feedback about it), instead of them checking up on my scoring, and being annoyed that it’s not posted yet. Not sure logistically how to do this. (Edit: ActiveGrade is already working on this)
- A portable scanner. For $300 I could solve Didn’t-Work #1, 2, and 3. Just scan their quiz papers as they finish. Makes it extra-easy for me to annotate the electronic copy and maybe make a screencast for a particular student, if warranted. Saves trees, too.
Update, July 29, 2011: If you already own a smartphone, the portable scanner is free, and it’s called CamScanner.