My second-year students’ grades are becoming polarized. When I dig into the results, I can tell that the struggling students are regurgitating memorized information they don’t fully understand. To try to reverse this trend, I’ve decided to overhaul my grading.
Since I started a year ago, I’ve been using a fairly standard grading breakdown:
- Tests (XX%)
- Labs (YY%)
- Major Project (ZZ%)
- Homework (AA%)
This accomplishes a lot, and I plan to continue using tests, labs, projects, and homework. But I want more out of my grading system. For example, I want to:
- Record a grade for each skill the student has/doesn’t have. Skills might or might not line up with chapters in the textbook.
- Record a student’s skill level at the end of the semester, not their skill level on the second Tuesday in February
- Inform the student of what they need to do to improve
- Make it impossible to hide shoddy technical skills behind good organization and copious note-taking
- Give an instant stomach-ache to anyone who tries to “game the system” by chasing points instead of learning (OK, maybe I’ll have to compromise on this one)
Lots of other teachers are tackling this, and many of them have blogs. Science Education On The Edge calls it, aptly enough, “skills-based grading.” Others call it “standards-based grading”, although the signal-to-noise ratio on that phrase is pretty bad (in electronics, I know what the skills are, but an educational “standard” is a political football). Regardless, here are some examples I can relate to.
All of these grading systems revolve around breaking “Test 1” (or what have you) into smaller chunks. Each tiny piece (a skill) is then reassessed throughout the semester to find out if the student is improving or not. If they are, it gives them some ammo about how to improve their other skills. If they aren’t, it lets them know it’s time to try another tactic. Finally, a single piece of work (say, a lab report) can get two grades: maybe one for troubleshooting, one for writing. Because the teacher grades the skills individually, excellent writing doesn’t mask bad troubleshooting, and vice versa. Overall, the goal is to give students more tools for seeing learning as something understandable and controllable. Whether an individual takes on that control or not is up to them; but at least I will have taken my best shot at pointing out the path.
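To make the mechanics concrete, here is a minimal sketch of that idea as a tiny gradebook. The class name, skill names, and the 0–4 scale are all invented for illustration; the point is just that one piece of work can yield several skill grades, and a skill’s grade is its most recent assessment rather than a running average.

```python
class SkillsGradebook:
    """Hypothetical sketch: per-skill scores, latest assessment wins."""

    def __init__(self):
        # skill name -> list of (assignment, score), in the order assessed
        self.history = {}

    def record(self, assignment, skill_scores):
        """Record one piece of work; it may assess several skills at once."""
        for skill, score in skill_scores.items():
            self.history.setdefault(skill, []).append((assignment, score))

    def current_grade(self, skill):
        """End-of-semester skill level, not the level on some Tuesday in February."""
        attempts = self.history.get(skill)
        return attempts[-1][1] if attempts else None


book = SkillsGradebook()
# One lab report, two separate grades: good writing can't mask bad troubleshooting.
book.record("Lab 3 report", {"troubleshooting": 2, "technical writing": 4})
# A later reassessment of troubleshooting supersedes the earlier score.
book.record("Quiz 5", {"troubleshooting": 3})

print(book.current_grade("troubleshooting"))    # 3, not an average of 2 and 3
print(book.current_grade("technical writing"))  # 4
```

A real system would need more nuance (some teachers take the best score, or a decaying average, rather than strictly the latest), but the structure is the same: skills, not assignments, are the unit of record.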
I’m going to try to come up with a version for AC Circuits, Solid State I, and Solid State III. I’ve learned a lot from other writers. Here’s a sample.
Pro
- A series of in-depth articles on Think Thank Thunk covering nuts and bolts of “standards-based” grading systems
- A collection of articles by various teachers discussing their (sometimes very different) approaches to grading
Con
- An engineering prof writes thoughtfully about difficulties of using standards-based grading when assessing synthesis of several skills
- Here are some parents who are really angry
Both
- Student feedback from Action-Reaction
- Student feedback from Pedagogue Padawan
Next up: a draft of my proposal.
Thanks for the plug for my blog.
Sometimes it is easy to separate things (lab writeups from problem sets). Sometimes it is hard (in a badly written lab report, can you tell whether they understood the concept?).
Agreed. This will be difficult at times.
If I’m assessing writing as one skill and, say, capacitor specifications as another skill, then the student gets a low score for writing, and a zero for capacitors (zero means nothing to assess). If their understanding of how to evaluate capacitor specifications was simply obscured by their writing, they can make an appointment to prove it (preferably in some form other than writing). If they can’t prove anything about their skill in evaluating capacitor specifications, then the zero was the correct mark.
If I’m assessing their ability to combine the two skills (write about their research into capacitor specifications), then they get a zero. *shrug* If it’s unintelligible, I can’t see giving credit for it. If I’ve designed my skills list correctly, they will have already done the two skills independently, and can look back at their scores to find out what to remediate first (caps, writing, or synthesis of the two).
So far, it passes my conditions 1 and 3 above. Incidentally, I haven’t had this problem so far, but I do have the opposite problem: students using strong writing skills to stitch together information from enough sources that they sound like they know what they’re talking about when they don’t (condition 4). This will require careful design of the skills list and careful choices about how I reassess.