You are currently browsing the category archive for the ‘Mastery’ category.
By last winter, the second-year students were pretty frustrated. They were angry enough about the workload to go to my department head about it. The main bone of contention seemed to be that they had to demonstrate proficiency in order to pass (by reassessing until their skills met the criteria), unlike in some other classes where actual proficiency was only required if you cared about getting an A. Another frequent argument was, “you can get the same diploma for less work at [other campus].” Finally, they were angry that my courses were making it difficult for them to get the word “honours” printed on their diplomas. *sigh*
It was hard for me to accept, especially since I know how much that proficiency benefits them when competing for and keeping their first job. But, it meant I wasn’t doing the Standards-Based Grading sales pitch well enough.
Anyway, no amount of evidence-based teaching methods will work if the students are mutinous. So this year, I was looking for ways to reduce the workload, to reduce the perception that the workload is unreasonable, and to re-establish trust and respect. Here’s what I’ve got so far.
1. When applying for reassessment, students now only have to submit one example of something they did to improve, instead of two. This may mean doing just one question from the back of the book. I suspect this will result in more students failing their reassessments, but that in itself may open a conversation.
2. I’ve added a spot on the quiz where students can tell me whether they are submitting it for evaluation, or just for practise. If they submit it for practise, they don’t have to submit a practise problem with their reassessment application, since the quiz itself is their practise problem. They could always do this before, but they weren’t using it as an option and just pressuring themselves to get everything right the first time. Writing it on the quiz seems to make it more official, and means they have a visible reminder each and every time they write a quiz. Maybe if it’s more top-of-mind, they’ll use it more often.
3. In the past, I’ve jokingly offered “timbit points” for every time someone sees the logic in a line of thinking they don’t share. At the end of the semester, I always bring a box of timbits in to share on the last day. In general, I’m against bribery, superficial gamification (what’s more gamified than schooling and grades??), and extrinsic motivation, but I was bending my own rules as a way to bring some levity to the class. But I realized I was doing it wrong. My students don’t care about timbits; they care about points. My usual reaction to this is tight-lipped exasperation. But my perspective was transformed when Michael Doyle suggested a better response: deflate the currency.
So now, when someone gives a well-thought-out “wrong” answer, or sees something good in an answer they disagree with, they get “critical thinking points”. At the end of the semester, I promised to divide them by the number of students and add them straight onto everyone’s grade, assuming they completed the requirements to pass. I’m giving these things out by the handful. I hope everybody gets 100. Maybe the students will start to realize how ridiculous the whole thing is; maybe they won’t. They and I still have a record of which skills they’ve mastered, and it’s still impossible to pass if they’re not safe or not employable. Since their grades are utterly immaterial to absolutely anything, it just doesn’t matter. And it makes all of us feel better.
In the meantime, the effect in class has been borderline magical. They are falling over themselves exposing their mistakes and the logic behind them, and then thanking and congratulating each other for doing it — since it’s a collective fund, every contribution benefits everybody. I’m loving it.
4. I’ve also been sticking much more rigidly to the schedule of when we are in the classroom and when we are in the shop. In the past, I’ve scheduled them flexibly so that we could take advantage of whatever emerged from student work. If we needed classroom time, we’d take it, and vice versa. But in a context where people are already feeling overwhelmed and anxious, one more source of uncertainty is not a gift. The new system means we are sometimes in the shop before they’re ready. I’m dealing with this by cautiously re-introducing screencasts — but with a much stronger grip on reading-comprehension techniques. I’m also making the screencast information available as a PDF document and a print document. On top of that, I’m adopting Andy Rundquist’s “back flip” technique — screencasts are created after class in order to answer lingering questions submitted by students. I hope that those combined ideas will address the shortcomings that I think are inherent in the “flipped classroom.” That one warrants a separate post — coming soon.
The feedback from the students is extremely positive. It’s early yet to know how these interventions affect learning, but so far the students just seem pleased that I’m willing to hear and respond to their concerns, and to try something different. I’m seeing a lot of hope and goodwill, which in themselves are likely to make learning (not to mention teaching) a bit easier. To be continued.
I’m thinking about how to make assessments even lower stakes, especially quizzes. Currently, any quiz can be re-attempted at any point in the semester, with no penalty in marks. For a second attempt, I require students to correct their original quiz and complete two practise problems in order to apply for reassessment. (FYI, mastery can also be demonstrated in any alternate format in lieu of a quiz, but students rarely choose that option.)
The upside of requiring practise problems is eliminating the brute-force approach where students just keep randomly trying quizzes, thinking they will eventually show mastery (this doesn’t work, but it wastes a lot of time). It also introduces some self-assessment into the process. We practise writing good-quality feedback, including trying to figure out what caused the mistake in the first place.
The downside is that the workload in our program is really unreasonable (dear employers of electronics technicians, if you are reading this, most hard-working beginners cannot go from zero to meeting your standards in two years. Please contact me to discuss). So, students are really upset about having to do two practise problems. I try to sell it as “customized homework” — since I no longer assign homework practise problems, they are effectively exempting themselves from any part of the “homework” in areas where they have already demonstrated proficiency. The students don’t buy it though. They put huge pressure on themselves to get things right the first time, so they won’t have to do any practise. That, of course, sours our classroom culture and makes it harder for them to think well.
I’m considering a couple of options. One is, when they write a quiz, to ask them whether they are submitting it to be evaluated or just for feedback. Again, it promotes self-assessment: am I ready? Am I confident? Is this what mastery looks and feels like?
If they’re submitting for feedback, I won’t enter it into the gradebook, and they don’t have to submit practise problems when they try it next (but if they didn’t succeed that time, it would be back to practising).
Another option is simply to chuck the practise problem requirement. I could ask for a corrected quiz and good-quality diagnostic feedback (written by themselves, to themselves) instead. It would be a shame, since the practise really does benefit them, but I’m starting to wonder whether the requirement is worth it.
All suggestions welcome!
Can my students use their skills in real-world situations? Heck, can they use their skills in combination with any single other skill in the curriculum? When I was redesigning my grading system, I needed a way to find out. It’s embedded in the “levels” of skills that I use, so I’ll explain those first.
What are these “levels” you keep talking about?
For every curriculum unit, students get a “skill sheet” listing both theory and shop skills. Here’s an example of the “theory” side of a unit of my Electric Machines course. (For a complete skills sheet, showing how theory skills correspond to shop skills, and the full story of how I use them, see How I Grade). If I were starting this unit over, I would improve the descriptions of each skill (“understand X, Y, and Z” isn’t very clear to the students) and make the formats consistent (the first four are noun phrases, the last one is a complete sentence; things like that annoy me). But this should give enough info to illustrate.
So, about synthesis…
Realistically, all skills involve synthesis. The levels indicate complexity of synthesis, not whether synthesis is involved at all. My goal is to disaggregate skills only as far as I need to figure out what they need to improve — and no further.
For example, in the unit shown above, wound-rotor induction motors are at level-2. That’s because they’re functionally almost identical to squirrel-cage motors, which we studied in the previous unit, and the underlying concepts help students understand the rest of the unit.
Quiz question: List one advantage and one disadvantage of wound-rotor induction motors compared to squirrel-cage motors.
Danger: a student could get this wrong if they don’t understand wound-rotor or squirrel-cage motors. But the question is simple enough that it’s pretty clear which one is the problem. Also, I have a record of the previous unit on squirrel-cage motors; both the student and I can look back at that to find out if their problem is there.
Synchronous, split-phase, and universal motors require a solid understanding of power factor, reflected load, and various ideas about magnetism (which the students haven’t seen since last year, and never in this context) so that puts them at level-3.
Quiz question: Synchronous motors can be used to correct power factor. Explain in 1-2 sentences how this is possible.
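For readers outside the trade, the arithmetic behind that quiz question can be sketched in a few lines. An over-excited synchronous motor draws leading current, supplying reactive power that cancels part of the lagging kVAR drawn by nearby induction loads. Here is a minimal numeric sketch; all the plant figures are made up for illustration and are not from the course:

```python
import math

# Hypothetical plant load: 100 kW at 0.70 lagging power factor.
p_kw = 100.0
pf_lag = 0.70
q_lag_kvar = p_kw * math.tan(math.acos(pf_lag))  # lagging (inductive) kVAR

# Suppose an over-excited synchronous motor supplies 60 kVAR leading.
q_lead_kvar = 60.0

# Net reactive power after correction, and the resulting power factor.
q_net = q_lag_kvar - q_lead_kvar
pf_new = p_kw / math.hypot(p_kw, q_net)

print(f"load reactive power: {q_lag_kvar:.1f} kVAR")   # about 102.0 kVAR
print(f"corrected power factor: {pf_new:.3f}")         # about 0.922
```

The point the quiz question is after is the middle step: the synchronous motor’s leading kVAR subtracts directly from the plant’s lagging kVAR, which is why it can correct power factor while also doing mechanical work.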
The level-4 skill in this unit is to evaluate a type of motor for a given application.
Quiz questions: “Recommend a motor for [scenario]. Explain why.” Or “you need to replace a 3-phase AC motor. Give 3 questions you should ask to help you select the best type. Explain why.”
Why this is an improvement over last year
Last year I would have put only the level-4 problem on a test. The solutions were either excellent or incoherent. I couldn’t help people get better, and they couldn’t help themselves.
Level 5 Questions
You’ll notice that there are no level 5 skills on the skill sheet, even though the unit is graded out of 5. Level 5 is what others might call “Mastery,” while Level 4 might be called “Proficiency.” I teach up to Level 4, and that’s an 80%. “Level 5 question” is the name I give to questions that are not exercises but actual problems for most of the class. There are a number of ways to get a 5/5. All of them include both synthesis and a context that was not directly taught in class. So the main difference between L4 and L5 isn’t synthesis; it’s problem-solving.
I occasionally put level-5 questions on quizzes, but not every quiz. I might do it to introduce a new unit, or as a way of touching on some material that otherwise we won’t have time for. Other ways to earn a level 5: research a topic I haven’t taught and present it to me, or to the class. Build something. Fix something. I prefer these to quiz questions; they’re better experience. So I put examples of project topics on the skill sheet. I also encourage students to propose their own topics. Whether they use my topics or theirs, they have to decide what exactly the question is, how they will find the answer, and how they will demonstrate their skill. We’ve had a ton of fun with this. I’ve sometimes put questions on quizzes that, if no one solved them, could be taken into the shop and worked on at their leisure.
I wrote lots in this post about level-5 questions that are independent projects, not quiz questions. But I didn’t give any examples of level-5 questions that are on quizzes, so here are a few.
This is a reduced-voltage manual starter on a DC shunt motor. If I gave this question now, it would be trivial, because we’ve done a whole unit on starters. But it was on the second quiz of the semester, when the students had barely wrapped their heads around DC motors. It’s a conceptually tough question because the style of drafting is unfamiliar to my students, there’s an electromagnet sealing in the switch that doesn’t make sense unless you’re thinking ahead to safety hazards caused by power failures, and we hadn’t discussed the idea that there was even such a thing as a reduced-voltage starter. But we had discussed the problem of high current draw on startup, the loading effect it causes, and the dangers of sudden startups of machinery that wasn’t properly de-energized. Those are the problems this device is intended to solve. One student got it.
Here’s one that no one solved, but someone built later in the shop.
Draw a circuit, with a square-wave power supply, where the capacitor charges up almost instantly and discharges over the course of 17 ms.
You may use any kind of component, but no human intervention is allowed (i.e., you can’t push a button or pull out a component or otherwise interfere with the circuit). You do not need to use standard component values.
This requires time-constant switching, which means combining a diode and a capacitor. They had just learned capacitors that week in one course, and diodes the previous week in a second course. The knowledge was pretty fresh, so they weren’t really ready to use it in a flexible way yet. But the diode unit was all about time-constant switching, and it’s a hard concept to get used to, so this question got them thinking about it from another angle.
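To make the time-constant intuition concrete, here is one possible set of numbers. The idea (my own sketch, not a solution from the class) is that the diode gives the capacitor a near-instant charge path on the positive half-cycle and blocks on the negative half-cycle, so discharge happens only through a resistor chosen so that about five time constants add up to 17 ms. The component values below are mine, picked for illustration:

```python
import math

# Illustrative component choice: the diode provides the fast charge path,
# and the capacitor can only discharge through r_discharge.
c_farads = 1e-6        # 1 uF capacitor
r_discharge = 3.4e3    # 3.4 kohm discharge resistor

tau = r_discharge * c_farads   # one time constant: 3.4 ms
t_full = 5 * tau               # "fully" discharged after ~5 time constants

print(f"tau = {tau * 1e3:.1f} ms")                       # 3.4 ms
print(f"full discharge in about {t_full * 1e3:.0f} ms")  # 17 ms

# Fraction of the initial voltage remaining at t = 17 ms: e^-5, under 1%.
v_frac = math.exp(-17e-3 / tau)
```

Any R-C pair with the same product would work equally well; the quiz is really testing whether students see that charge and discharge can be given different time constants at all.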
Other examples: find total impedance in a parallel circuit, when all we’ve studied so far is series circuits. If they followed the rules for parallel resistance that we studied last year, it will work out; but they had just learned vectors, many of them for the first time, so most people added the vectors (instead of adding the inverses and inverting). Or, find total impedance of a resistor-capacitor-inductor circuit, when all we’ve studied is resistors and capacitors. Amazingly, most of the class got that one. I was really impressed. Again, it’s a question where the conclusion follows logically from tools that the students already have; but they might have to hold the tool by the blade and whack the problem with what they think is the handle.
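The difference between the two procedures in that parallel-impedance example is easy to show with complex numbers. A minimal sketch, with component values of my own choosing:

```python
# Parallel impedance of a resistor and a capacitor, as complex numbers.
z_r = complex(100, 0)     # 100 ohm resistor
z_c = complex(0, -200)    # capacitor with 200 ohms of reactance (-j200)

# Correct: the parallel-resistance rule, applied with complex arithmetic --
# add the inverses (admittances), then invert the sum.
z_parallel = 1 / (1 / z_r + 1 / z_c)   # 80 - 40j, magnitude ~89.4 ohms

# The common mistake: adding the vectors directly, which is actually
# the formula for a SERIES combination.
z_wrong = z_r + z_c                    # 100 - 200j, magnitude ~223.6 ohms

print(f"correct:  {abs(z_parallel):.1f} ohms")
print(f"mistake:  {abs(z_wrong):.1f} ohms")
```

The correct answer’s magnitude is smaller than either branch, as a parallel combination must be; the vector-sum mistake produces a value larger than both, which is one quick sanity check students can apply themselves.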
My 2nd-year class has a bad case of compliance. About half the class will do whatever I tell them to – without question. The “without question” part might be why they seem to see learning as an unknowable and uncontrollable force of the universe.
The other half of the class questions me until they’re satisfied that there’s a good reason for the activities I’ve proposed. Then they go off and do them, sometimes with extra experiments or research thrown in if unexpected results piqued their interest. I’m realizing, just past my first anniversary as a teacher, that I don’t know how to help students transition from compliance to self-direction.
I’ve been thinking about overhauling my grading scheme. I want to see my students get better at analyzing their skills and improving them (i.e. troubleshooting their education). I liked the idea of trying to create a grading system that would help them do that better. I was unsure how much homework should be worth, if anything, for lots of reasons (more in another post). Then something hit me like a punch in the head: buried at the bottom of a post on Think Thank Thunk was a comment by Ron Johnson (no hyperlink). He said “When we assign grades to things like organization, we are [rewarding] compliance, not learning.”
How often have I exhorted my students to think for themselves, to question everything, to question the user, the teacher, even the laws of physics? Then I give them grades for following orders without thinking. If they’re not getting mixed messages in all this, I’ll eat my safety boots.
Around the same time, I saw the results from the last test. Out of twelve students in the 2nd-year class, I had the world’s worst bimodal distribution: seven As and five Fs. Something had to give.
I don’t think I’ve inflated the grades; I think they’re an accurate representation of skills. The students who got As are able to troubleshoot electronic devices, either on paper or with a voltmeter in their hands. The students who failed have lots of skills too, but are inconsistent when choosing which one to apply, or when solving problems that are different from the textbook examples, or when combining several skills together. The reason I’m going to overhaul my grading system now, not later, is that those students also can’t seem to gauge how close to mastery they are, and have study/practice strategies that appear random (or sometimes self-flagellating). This is for them.
So, stay tuned for the new system, starting in January. I know it’s not a magic bullet, but I hope that measuring progress differently will generate some data that will help us figure out what else to do. This blog is my way to document what I find, get some feedback from colleagues and, with luck, add something to the pool of knowledge that tackles these questions from the perspective of vocational learning. I hope you’ll join me for the ride.