My plan for this semester was not to do battle with the four horsemen of the curricular apocalypse (Time, Textbooks, Tradition, and Tests). I knew they were out there, but I was ignoring them. I was going to create a smaller, simpler project for myself. One that would result in a sensible amount of sleep and possibly even the occasional pretense of a social life. I vowed that I would tackle only the grading piece of the eschatological pie, changing to what I call a “skills-based” scheme.
Now it’s a month into the semester. We’ve barely cracked the textbook or the lab book. My lesson plans have radically changed. Time-management has radically changed, for me and the students. And tests… well, they’re smaller, lower-stakes, and can often be replaced or supplemented by shop demonstrations. I didn’t mean to do it. But the changes in the grading scheme started a snowball that changed lots of other things too.
Textbooks (Or Lack Thereof)
I created a list of skills that students had to demonstrate to complete a topic unit. That meant I had to think hard about what skills are actually indispensable. That in turn made me think hard about why I teach what I teach, and why the textbook includes what it includes. I asked myself lots of questions like “Why do they need this skill? When will they need this skill? In what context will they use it?” I ended up being much more focused on our goals. Last year I questioned whether the textbook treatment had too much depth, or too little, or depth on the wrong things. This year I was able to start answering those questions. Now that I have more information, I can’t bring myself to not use it. That means the textbook and lab book are more like dictionaries and less like instruction manuals.
Tradition: Lesson Plans
Once I realized that the textbook didn’t lead where I wanted to go, I had to develop some lesson plans in a hurry. This rubric for application problems helped a lot. Developed by Dan Meyer for math classes, it helps students find the meaning behind the math, and connect it to what they know about the real world.
Since I’m especially concerned with synthesis and problem-solving, I’m looking for ways to help students find meaning in links between ideas. Kate Nowak’s guidelines were the best, most concrete suggestions I found.
Time and Tests
Under the new scheme, you can retry a test question any Wednesday afternoon. Or, if you prefer, you can show your mastery of that skill by building a circuit — either during shop period or in open shop time on Tuesdays. This has opened up lots of interesting conversations. For one, many students have discovered gaps in their fundamental skills that neither they, nor I, suspected. A second-year student blurted out in class last week, “Is the cause on a graph always on the x-axis?!”
Having some very basic questions on the test has helped me figure out how to coach them. Some students who have never approached me for extra help are talking to me after class about why they didn’t get credit for something. Theory: if you get a small, simple question wrong, you can ask the teacher a small, simple question. If you get a big complicated problem wrong, it seems futile or maybe impossible to even figure out what question to ask. The easy questions at the beginning of the test also reduce test anxiety, I think (can’t prove this).
In order to get 100% for a unit, students must complete a more in-depth problem, develop their own problem-solving strategy, and combine two or more topics. I throw one of these questions on each test. They aren’t necessarily difficult — just unfamiliar applications of familiar skills. But they’ve become a great way to introduce a new topic. On each week’s quiz, the “Level 5” question is a simple problem from the next chapter. Result: most of the class is attempting problems that I haven’t explicitly taught yet. Even if they don’t get the right answer, the process helps them clarify their assumptions. At the end of the quiz, they’re dying to know how it works. This leads to some of our best conversations.
The students hand in an answer sheet at the end of the quiz, which I later use to enter their scores. They keep the quiz paper, which (if they followed instructions) has all their calculations, sketches, etc. Then we immediately grade the quiz as a class. Ideally, they know instantly what they got right and what they need to work on. Realistically, they hate writing comments on their quiz papers, so they quickly forget which ones are right and which are wrong, or why they’re wrong. (Why? Is it because it forces them to face that they made a mistake?) Then, they can’t tell what they need to reassess. So, for the last test, I asked them to pass their quiz papers in to me so I could see the feedback they are writing to themselves, and write back to them about it. I was dismayed to see how many students, when forced to actually grade their papers, wrote incredibly negative comments to themselves (“Don’t rush you moron!” or “Stupid stupid stupid!”). Wow. Good for me to know, but I’m not sure how to address this, other than to write back with a comment that I won’t stand for my students being insulted in my class — not even by their past selves.
About half of my class has a hard time seeing the connections between different ideas (the rest of the class is bored to tears if we spend any time on it). It’s been hard to figure out how to handle this. But some interesting results have surfaced this month. Whether they’re due to the changes in the grading scheme etc., we’ll never know. Example: my colleague is introducing filter circuits in a very different context than the one in which I teach them. Most students don’t even recognize that it’s the same circuit at first. He had barely put the circuit on the board when a student announced, “Isn’t that just a low-pass filter?” Another student created a circuit that demonstrates time-constant switching — foreshadowing next week’s topic. Then there was the student who thought they had found a sneaky loophole in my new grading scheme. “Can I use a buffer circuit from Digital class to demonstrate that I understand op-amp gain for Solid State class?” I refrained from weeping for joy or jumping up and down. “I suppose,” I agreed.
Skills-Based Grading: Transformative learning or edu-fad?
A number of people have written about the idea that changing a grading system does not magically improve learning or teaching. That’s true. But I think it’s also true that redesigning a grading scheme while focusing on skills (or “standards” or “outcomes” or whatever they’re called) provides a lot of information that can be used to improve learning, or at least to find out where the problem areas are. For me, at least, the more of that information I had, the less I was able to continue doing what I had always done.