Can my students use their skills in real-world situations? Heck, can they use their skills in combination with any single other skill in the curriculum? When I was redesigning my grading system, I needed a way to find out. It’s embedded in the “levels” of skills that I use, so I’ll explain those first.
What are these “levels” you keep talking about?
For every curriculum unit, students get a “skill sheet” listing both theory and shop skills. Here’s an example of the “theory” side of a unit of my Electric Machines course. (For a complete skills sheet, showing how theory skills correspond to shop skills, and the full story of how I use them, see How I Grade). If I were starting this unit over, I would improve the descriptions of each skill (“understand X, Y, and Z” isn’t very clear to the students) and make the formats consistent (the first four are noun phrases, the last one is a complete sentence; things like that annoy me). But this should give enough info to illustrate.
So, about synthesis…
Realistically, all skills involve synthesis. The levels indicate complexity of synthesis, not whether synthesis is involved at all. My goal is to disaggregate skills only as far as I need to figure out what they need to improve — and no further.
For example, in the unit shown above, wound-rotor induction motors are at level-2. That’s because they’re functionally almost identical to squirrel-cage motors, which we studied in the previous unit, and the underlying concepts help students understand the rest of the unit.
Quiz question: List one advantage and one disadvantage of wound-rotor induction motors compared to squirrel-cage motors.
Danger: a student could get this wrong if they don’t understand wound-rotor or squirrel-cage motors. But the question is simple enough that it’s pretty clear which one is the problem. Also, I have a record of the previous unit on squirrel-cage motors; both the student and I can look back at that to find out if their problem is there.
Synchronous, split-phase, and universal motors require a solid understanding of power factor, reflected load, and various ideas about magnetism (which the students haven’t seen since last year, and never in this context) so that puts them at level-3.
Quiz question: Synchronous motors can be used to correct power factor. Explain in 1-2 sentences how this is possible.
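For readers outside the trade, the idea behind that quiz question can be sketched numerically. This is my own illustration, not part of the course materials, and the load and motor values are made up: an over-excited synchronous motor draws leading current, so it supplies reactive power to the plant while still doing mechanical work, pulling the overall power factor toward 1.

```python
# Hedged sketch (illustrative values, not from the course): how an
# over-excited synchronous motor corrects a plant's power factor.
import math

# Plant load before correction: 100 kW at 0.70 lagging power factor.
p_load = 100e3                                    # real power (W)
pf_load = 0.70
q_load = p_load * math.tan(math.acos(pf_load))    # lagging reactive power (VAR)

# An over-excited synchronous motor draws leading current, i.e. it
# *supplies* reactive power while also driving its mechanical load.
p_motor = 20e3      # motor real power (W), assumed
q_motor = -15e3     # leading VARs (negative = supplied), assumed

p_total = p_load + p_motor
q_total = q_load + q_motor
pf_total = p_total / math.hypot(p_total, q_total)

print(f"Power factor before: {pf_load:.2f}")
print(f"Power factor after:  {pf_total:.2f}")   # closer to 1.0
```

With these assumed numbers the plant power factor rises from 0.70 to about 0.81; sizing the motor's excitation is the part the students have to reason about.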
The level-4 skill in this unit is to evaluate a type of motor for a given application.
Quiz questions: “Recommend a motor for [scenario]. Explain why.” Or “you need to replace a 3-phase AC motor. Give 3 questions you should ask to help you select the best type. Explain why.”
Why this is an improvement over last year
Last year I would have put only the level-4 problem on a test. The solutions were either excellent or incoherent. I couldn’t help people get better, and they couldn’t help themselves.
Level 5 Questions
You’ll notice that there are no level 5 skills on the skill sheet, even though the unit is graded out of 5. Level 5 is what others might call “Mastery,” where Level 4 might be called “Proficiency.” I teach up to Level 4, and that’s an 80%. “Level 5 question” is the name I give to a question that is not an exercise but an actual problem for most of the class. There are a number of ways to get a 5/5. All of them include both synthesis and a context that was not directly taught in class. So the main difference between L4 and L5 isn’t synthesis; it’s problem-solving.
I occasionally put level-5 questions on quizzes, but not every quiz. I might do it to introduce a new unit, or as a way of touching on some material that otherwise we won’t have time for. Other ways to earn a level 5: research a topic I haven’t taught and present it to me, or to the class. Build something. Fix something. I prefer these to quiz questions; they’re better experience. So I put examples of project topics on the skill sheet. I also encourage students to propose their own topics. Whether they use my topics or theirs, they have to decide what exactly the question is, how they will find the answer, and how they will demonstrate their skill. We’ve had a ton of fun with this. I’ve sometimes put questions on quizzes that, if no one solved them, could be taken into the shop and worked on at leisure.
I wrote lots in this post about level-5 questions that are independent projects, not quiz questions. But I didn’t give any examples of level-5 questions that are on quizzes, so here are a few.
This is a reduced-voltage manual starter on a DC shunt motor. If I gave this question now, it would be trivial, because we’ve done a whole unit on starters. But it was on the second quiz of the semester, when the students had barely wrapped their heads around DC motors. It’s a conceptually tough question because the style of drafting is unfamiliar to my students, there’s an electromagnet sealing in the switch that doesn’t make sense unless you’re thinking ahead to safety hazards caused by power failures, and we hadn’t discussed the idea that there was even such a thing as a reduced-voltage starter. But we had discussed the problem of high current draw on startup, and the loading effect that it causes, and the dangers of sudden startups of machinery that wasn’t properly de-energized. Those are the problems that this device is intended to solve. One student got it.
Here’s one that no one solved, but someone built later in the shop.
Draw a circuit, with a square-wave power supply, where the capacitor charges up almost instantly and discharges over the course of 17 ms.
You may use any kind of component, but no human intervention is allowed (i.e., you can’t push a button or pull out a component or otherwise interfere with the circuit). You do not need to use standard component values.
This requires time-constant switching, which means combining a diode and a capacitor. They had just learned capacitors that week in one course, and diodes the previous week in a second course. The knowledge was pretty fresh, so they weren’t really ready to use it in a flexible way yet. But the diode unit was all about time-constant switching, and it’s a hard concept to get used to, so this question got them thinking about it from another angle.
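To make the intended answer concrete: the trick is that charging happens through the diode (near-zero series resistance, so the time constant is microseconds), while discharge is forced through a resistor sized for the 17 ms decay. This sketch is my own worked version with assumed component values, not a student solution:

```python
# Hedged sketch of the diode/capacitor answer. Component values are
# illustrative assumptions; the quiz allowed non-standard values.
C = 1e-6            # capacitor: 1 uF (assumed)

# Treat "fully discharged" as ~5 time constants, so 5*R*C = 17 ms.
t_discharge = 17e-3
tau = t_discharge / 5
R = tau / C         # discharge resistor

print(f"tau = {tau*1e3:.1f} ms, R = {R/1e3:.1f} kohm")

# Charging happens through the diode's small forward resistance
# (say ~10 ohms, assumed), so the charge time constant is microseconds:
r_diode = 10.0
tau_charge = r_diode * C
print(f"charge tau = {tau_charge*1e6:.0f} us  (looks 'instant' next to 17 ms)")
```

With a 1 µF capacitor, the discharge resistor comes out to 3.4 kΩ, and the charge path is over 300 times faster than the discharge path, which is what makes the charging look instantaneous on a scope.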
Other examples: find total impedance in a parallel circuit, when all we’ve studied so far is series circuits. If they followed the rules for parallel resistance that we studied last year, it will work out; but they had just learned vectors, many of them for the first time, so most people added the vectors (instead of adding the inverses and inverting). Or, find total impedance of a resistor-capacitor-inductor circuit, when all we’ve studied is resistors and capacitors. Amazingly, most of the class got that one. I was really impressed. Again, it’s a question where the conclusion follows logically from tools that the students already have; but they might have to hold the tool by the blade and whack the problem with what they think is the handle.
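The parallel-impedance trap above is easy to show with complex numbers (how the math works out, with assumed values; the students work it as vectors on paper, not in code):

```python
# The parallel-impedance trap: impedances are complex "vectors", and
# in parallel you must add the *inverses* and invert -- not add the
# vectors themselves. Values are illustrative assumptions.
Z_r = complex(100, 0)     # 100-ohm resistor
Z_c = complex(0, -50)     # capacitor with 50 ohms of reactance

# Wrong (what most students did): add the vectors as if in series.
Z_wrong = Z_r + Z_c

# Right: invert the sum of the inverses.
Z_parallel = 1 / (1/Z_r + 1/Z_c)

print(f"vectors added as if in series: {abs(Z_wrong):.1f} ohms")
print(f"true parallel combination:     {abs(Z_parallel):.1f} ohms")
```

The two answers differ by more than a factor of two (about 111.8 Ω versus 44.7 Ω here), so the mistake is not a rounding quibble; the series rule and the parallel rule really are different tools.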
In the new grading system, the skills list for each unit ends at 4/5. Any student who wants a 5/5 must apply their skills to a novel context (not explicitly taught in class), choose their own problem-solving strategy, and combine ideas from at least two units.
I put an L5 question on a quiz at least once per unit as a way to assess problem-solving and synthesis. The students are handling them quite well. But the questions have also had a host of unexpectedly positive benefits for the class. Top 10 reasons I love the “L5 question”:
1. I can put anything on a quiz. Since L5 questions by definition include synthesis, the students understand that anything is fair game: skills we’ve learned in other units, in co-requisite courses, in pre-requisite courses. So L5 questions free me from the compartmentalization that the skills-based grading scheme might otherwise enforce.
2. Students use it to practise “trying something” even though they don’t know the right answer. An L5 question on a quiz feels like a bonus question, so there’s less stigma attached to getting them wrong. Unlike other levels, your score on L5 questions cannot go down. So, you can write any wacky thing that goes through your head, and there’s no penalty. I give 30 minutes for quizzes, and deliberately choose the questions so that even the slower students finish in about 25 minutes. That means there’s nothing left to do except think about the L5 question. This helps students practise creating representations, choosing symbols, and thinking about unfamiliar things in a low-stakes environment. (Who would have thought that a quiz would become a low-stakes environment??)
3. It’s great for introducing a new unit. Since every unit builds on the previous one, a student who has mastered the tricky questions from Unit 1 probably has all the skills to do the easy questions from Unit 2, if they can figure out how to apply them. I throw these on the quiz and one of two things happens: some students get them right, in which case they’re primed to make sense of the new unit; some get them wrong, in which case I’m introducing Unit 2 at the exact moment when they’re dying of curiosity to know how it works.
4. It doesn’t have to go on a quiz. An L5 question can be a research project or an invention or a presentation to the class or an interpretive dance or a graphic novel, if it meets the synthesis/problem-solving criteria.
5. It’s a great response to tangential questions in class (“Interesting, I’m not sure of the answer… How could you find out? Sounds like a great L5 question.”)
6. It’s a good way to bring up neat topics that don’t quite fit in the curriculum. I make a list of some of them at the bottom of each skill sheet. Any student who is curious can learn more about one of those topics. It’s then up to them to propose both a question and the assessment of its answer.
7. It’s an instant way to incorporate fix-it projects, service-learning opportunities, and inter-program collaborations that cross my desk every semester.
- The head chef from the culinary program went to Europe and fried the power supply of his fancy sous-vide cooker, so a student traced the problem, selected and ordered a replacement for the obsolete part, and put it back together.
- A student in Disability Services needs help building a rehabilitative technology toy for developing fine-motor skills, so a team of four 1st-year students are working together to help him out.
- The Academic Chair’s Roomba isn’t finding its dock properly anymore. I ask for volunteers, and voila — Level 5 question.
I don’t need the thing to work at the end; I expect the student to have developed a sensible problem-solving strategy and synthesized their skills. (Proving to me that it shouldn’t be fixed — for economic or other reasons — might be perfectly legitimate. It depends on whether you have enough evidence to convince me).
8. The students are free to propose a problem. About anything. As long as it requires them to synthesize and problem-solve. They can bring in something broken from home and work on it. They can decide to experiment with something they read about in a trade journal or diy magazine.
- The other day a student completed his assigned exercise early (using an inductor to light a 120V lamp using a 12V supply). So he went out to the parking lot, removed the relay from his trunk latch, wired it into the lamp circuit as a crude boost chopper, and used a signal generator to energize the relay fast enough to make it look like the light was on continuously.
- Two students figured out how to test a transistor before I taught the unit — so they asked for permission to destroy one to test their algorithm. I agreed, on the condition that they teach their methods to the class.
The assessments don’t have to be involved or time-consuming; they just have to deepen a student’s thinking. About 3/4 of my students have completed at least one L5 question.
9. They are a built-in back-up plan for students who finish their work in class early.
10. The students get stoked about them.
My plan for this semester was not to do battle with the four horsemen of the curricular apocalypse (Time, Textbooks, Tradition, and Tests). I knew they were out there, but I was ignoring them. I was going to create a smaller, simpler project for myself. One that would result in a sensible amount of sleep and possibly even the occasional pretense of a social life. I vowed that I would tackle only the grading piece of the eschatological pie, changing to what I call a “skills-based” scheme.
Now it’s a month into the semester. We’ve barely cracked the textbook or the lab book. My lesson plans have radically changed. Time-management has radically changed, for me and the students. And tests… well, they’re smaller, lower-stakes, and can often be replaced or supplemented by shop demonstrations. I didn’t mean to do it. But the changes in the grading scheme started a snowball that changed lots of other things too.
Textbooks (Or Lack Thereof)
I created a list of skills that students had to demonstrate to complete a topic unit. That meant I had to think hard about what skills are actually indispensable. That in turn made me think hard about why I teach what I teach, and why the textbook includes what it includes. I asked myself lots of questions like “Why do they need this skill? When will they need this skill? In what context will they use it?” I ended up being much more focused on our goals. Last year I questioned whether the textbook treatment had too much depth, too little, or depth in the wrong places. This year I was able to start answering those questions. Now that I have more information, I can’t bring myself to not use it. That means the textbook and lab book are more like dictionaries and less like instruction manuals.
Tradition: Lesson Plans
Once I realized that the textbook didn’t lead where I wanted to go, I had to develop some lesson plans in a hurry. This rubric for application problems helped a lot. Developed by Dan Meyer for math classes, it helps students find the meaning behind the math, and connect it to what they know about the real world.
Since I’m especially concerned with synthesis and problem-solving, I’m looking for ways to help students find meaning in links between ideas. Kate Nowak’s guidelines were the best, most concrete suggestions I found.
Time and Tests
Well, you can retry a test question any Wednesday afternoon. Or, you can show your mastery of that skill by building a circuit, if you prefer — either during shop period or in open shop time on Tuesdays. This has opened up lots of interesting conversations. For one, many students have discovered gaps in their fundamental skills that neither they, nor I, suspected. A second-year student blurted out in class last week, “Is the cause on a graph always on the x-axis?!”
Having some very basic questions on the test has helped me figure out how to coach them. Some students who have never approached me for extra help are talking to me after class about why they didn’t get credit for something. Theory: if you get a small, simple question wrong, you can ask the teacher a small, simple question. If you get a big complicated problem wrong, it seems futile or maybe impossible to even figure out what question to ask. The easy questions at the beginning of the test also reduce test anxiety, I think (can’t prove this).
In order to get 100% for a unit, students must complete a more in-depth problem, develop their own problem-solving strategy, and combine two or more topics. I throw one of these questions on each test. They aren’t necessarily difficult — just unfamiliar applications of familiar skills. But they’ve become a great way to introduce a new topic. On each week’s quiz, the “Level 5” question is a simple problem from the next chapter. Result: most of the class is attempting problems that I haven’t explicitly taught yet. Even if they don’t get the right answer, the process helps them clarify their assumptions. At the end of the quiz, they’re dying to know how it works. This leads to some of our best conversations.
The students hand in an answer sheet at the end of the quiz, which I later use to enter their scores. They keep the quiz paper, which (if they followed instructions) has all their calculations, sketches, etc. Then we immediately grade the quiz as a class. Ideally, they know instantly what they got right and what they need to work on. Realistically, they hate writing comments on their quiz papers, so they quickly forget which ones are right and which are wrong, or why they’re wrong. (Why? Is it because it forces them to face that they made a mistake?) Then, they can’t tell what they need to reassess. So, for the last test, I asked them to pass their quiz papers in to me so I could see the feedback they are writing to themselves, and write back to them about it. I was dismayed to see how many students, when forced to actually grade their papers, wrote incredibly negative comments to themselves (“Don’t rush you moron!” or “Stupid stupid stupid!”). Wow. Good for me to know, but I’m not sure how to address this, other than to write back with a comment that I won’t stand for my students being insulted in my class — not even by their past selves.
About half of my class has a hard time seeing the connections between different ideas (the rest of the class is bored to tears if we spend any time on it). It’s been hard to figure out how to handle this. But some interesting results have surfaced this month. Whether they’re due to the changes in the grading scheme etc., we’ll never know. Example: my colleague is introducing filter circuits in a very different context than the one in which I teach them. Most students don’t even recognize that it’s the same circuit at first. He had barely put the circuit on the board when a student announced, “Isn’t that just a low-pass filter?” Another student created a circuit that demonstrates time-constant switching — foreshadowing next week’s topic. Then there was the student who thought they had found a sneaky loophole in my new grading scheme. “Can I use a buffer circuit from Digital class to demonstrate that I understand op-amp gain for Solid State class?” I refrained from weeping from joy or jumping up and down. “I suppose,” I agreed.
Skills-Based Grading: Transformative learning or edu-fad?
A number of people have written about the idea that changing a grading system does not magically improve learning or teaching. That’s true. But I think it’s also true that redesigning a grading scheme while focusing on skills (or “standards” or “outcomes” or whatever they’re called) provides a lot of information that can be used to improve learning, or at least to find out where the problem areas are. For me, at least, the more of that information I had, the less I was able to continue doing what I had always done.
Update: the most recent version of my grading policy has its own page, “How I Grade,” on a tab above.
The new assessment and reporting plan is done… for now. Here’s the status so far.
The Rubric — Pro
If you score some level-3 or level-4 questions, you don’t get credit for them until you’ve finished the level-2 skills. It doesn’t invalidate the more advanced work you’ve done; you don’t have to necessarily do it all over again — it’s sort of held in the bank, to be cashed in once the level 2 stuff is complete. It doesn’t penalize those who choose a non-linear path, but it doesn’t let basic skills slip through the cracks.
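The “banking” rule can be sketched as a tiny scoring function. This is my reading of the rule, not the author’s actual gradebook logic, and the function names are my own:

```python
# Hedged sketch of "banked" scoring: level-3/4 work is recorded but
# only counts toward the unit score once every level-2 skill is done.
# (My interpretation of the rule; names and shape are assumptions.)
def unit_score(level2_done, highest_level_demonstrated):
    """level2_done: one boolean per level-2 skill in the unit.
    highest_level_demonstrated: 2, 3, or 4 (best work shown so far)."""
    if not all(level2_done):
        return 1  # advanced work stays banked, not yet credited
    return highest_level_demonstrated

# Level-3 questions answered, but one level-2 skill still missing:
print(unit_score([True, False, True], 3))   # banked
# Same student after completing the last level-2 skill:
print(unit_score([True, True, True], 3))    # bank cashed in
```

The key property is that the second call credits the level-3 work without the student redoing it; completing the missing level-2 skill is the only thing that changed.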
Choosing Skills — Con
Oh boy, this is definitely ridiculous. As you can see, there are way too many. It actually got worse since my first draft, peaking in version 0.6 and coming back down in the one linked above. These guidelines helped me beat it back. I’m telling myself that the level 2 skills will repeat in each topic, and that it won’t end up being 100 items in my gradebook. On the other hand, this program has 6 semesters-worth of material crammed into 4 semesters-worth of time. It is like being carpet-bombed with information. And yet, when our grads get hired, there is always more their employers wish they knew. The previous grading system didn’t create the problem; this new system will not solve it. The whole project would be frankly impossible without SBG Gradebook, so bottom-of-my-heart thanks to Shawn Cornally and anyone else involved.
Re-assessment — Pro
Re-assessment can be initiated by me (quizzes) or by the student (by showing me that they’ve done something to improve). Grades can go down as well as up. I took to heart the suggestions by many people that one day per week should be chosen for reassessment. We’re blessed with 3-hour shop periods, which is typically more time than the students need to get a lab exercise done. So shop period isn’t just for reassessing shop things any more; you can also reassess written things then too.
Synthesis — We’ll see
Some synthesis skills I consider essential, like “determine whether the meter or the scope is the best tool for a given measurement”. Those are level-3 skills, with their individual parts included as level-2 skills. That means you have to do them to pass. It also means I have to explicitly teach students not only how to use a scope and a meter, but how to “determine”. Seriously, they don’t know. Sometimes I weep in despair that it’s possible to graduate from high school, maybe even get a job, work for a few years, have a couple of kids, and still not know how to make a decision strategically. (Or at least, not be able to call on that skill while you are physically located inside a classroom). Other days I stop tilting at windmills and start teaching it, helping students recognize situations where they have already done it, and trying to convince them that in-school and everywhere-else are not alternate universes.
Other forms of synthesis are ways of demonstrating excellence but not worth failing someone over; these become level-4 or 5 skills. It still tells the student where they are strong and where they can improve. It tells me and their next-semester teachers how much synthesis they’ve done. That’s all I need.
This directly contradicts my earlier plan to let students “test out” of a skill. But, because level 2 and level 5 are now different skills, I don’t have to write 5 versions of the question for each skill. I think that brings the workload (for the students and me) back down to a reasonable level, allowing me to reassess throughout the term. The quizzes are so cumulative that I don’t think an exam would add anything to the information.
Retention — Too soon to tell
It’s important to me to know how you’re doing today, not last month. That means I reserve the right to reassess things any time, and your score could very well go down. This is bound up with the structure of the course: AC Circuits has 6 units, each of which builds directly on the previous one (unlike a science class where, for example, unit 1 might be atoms and unit 2 might be aardvarks). Con: a missed skill back in unit 1 will mess you up over and over. Pro: provides lots of practice and opportunities to work the same skill from different angles. With luck, Unit 5 will give you some insight on Unit 2 and allow you to go back and fix it up if needed.
Feedback — Pro, I think
This will be tough, because there’s not enough time. The concepts in these courses are complex and take a long time to explain well. The textbook is a good reference for looking up things you already know but not much good at explaining things you don’t know. That means I talk a lot in class. At best, I get the students participating in conversations or activities or musical re-enactments (don’t laugh — “Walk like… an ee-lec-tron” is one of my better lesson plans) but it leaves precious little time for practice problems. I’ll try to assign a couple of problems per night so we can talk about them in class without necessarily doing them in class.
I’ve also folded in extra feedback to this “weekly portfolio” approach I stole from Jason Buell. Each student has a double-pocket folder for their list of topic skills. There are a couple of pieces of looseleaf in the brads too. When they’ve got something they either want feedback on (maybe some especially-troublesome practice problems that we didn’t have time to review in class) or that they want to submit, they can write a note on the looseleaf, slide the documentation into the pocket, and leave it in my mailbox. I either do or do not agree that it sufficiently demonstrates skills X, Y, and Z, and write them back. We did a bit of this with a work-record book last semester, and the conversations we had in writing were pretty cool. I’m looking forward to the “message-board” as our conversation goes back and forth. I hope to keep the same folders next year, so we can refer back to old conversations.
It’s official: I submitted the new plan for feedback and grading.
Since V0.1, I have gone through about 7 more versions, experimenting with a dizzying array of variables. My final result is pretty different from my original thoughts, but I think I’ve struck a balance I’m happy with.
I chose a “topic-oriented” scheme ripped off in large part from Always Formative, because it clarifies which skills go together. I also used his scoring system: you can’t get a 3 until you’ve completed all the level 2 stuff (although I switched to a 5-point system, for reasons that have not changed since this post). I like this setup because it suggests an order in which to tackle things. At the same time, it doesn’t prevent you from attempting harder problems, or recognizing that you’ve already done harder problems. So, it has scaffolding and flexibility at the same time.
Here’s what the tracking sheet looks like for the topic called AC measurement (a first-year course). Note that the students get the first two pages; the third page is a bank of level-5 questions that I may use if students ask for them.
Skills Are a Yes-Or-No Question
I also like the “yes or no” approach to the skills. Each skill is not graded on a rubric; you’ve either demonstrated the skill or you have not. I think this will make grades feel less like a “moving target” to the students, cutting down on time-wasters like “I got that question mostly right so I should get 4/5 instead of 3/5.” Now, that question is a skill. You either demonstrated that you have the skill, in which case you get a YES; or you did not, in which case you try again.
Finally, I chose this system because it is similar to the frankly excellent system that was set up by my predecessor. When I started a year ago, the students were already working at their own pace, each on their own project, demonstrating when they were ready, and re-demonstrating if they weren’t satisfied with the first time… but only in the shop. I’m looking forward to extending those benefits to our classroom time.
“25 students, aged eight to ten years old have become the youngest scientists ever to be published in the prestigious Royal Society journal Biology Letters.
Their findings report that buff-tailed bumblebees can learn to recognize nourishing flowers based on colors, patterns and spatial relationships… ‘This experiment is important, because, as far as we know, no one in history (including adults) has done this experiment before.’ Also, ‘It tells us that bees can learn to solve puzzles (and if we are lucky we will be able to get them to do Sudoku in a couple of years’ time).’ ”
Dear kids of Blackawton Elementary School, I bow before your superior awesomeness.
New grading strategy, part I: a grading rubric.
I was going to tackle the list of skills first. In fact I did. Then I found myself waffling over how much info makes up a skill. If the pieces are too small, there will be a thousand of them to grade. If they’re too big, it will be impossible to summarize them in a single grade. I kept thinking about whether it was assessable — which meant knowing how I would assess.
So I decided to try the rubric first. Here’s a bit of blogosphere roundup:
|  | Max Score | To get full marks |
| --- | --- | --- |
| Continuous Everywhere but Differentiable Nowhere | 4 | Demonstrate the skill; use algebra correctly; use correct notation |
| Teaching Statistics | 4 | Demonstrate skill; solve a complex problem; solve independently (lower scores for solving with assistance) |
| Brain Open Now | 4 | Demonstrate the skill; use algebra correctly; use correct notation; draw valid conclusions |
| MeTA Musings | 4 | Demonstrate full understanding |
| Sarcasymptote | 5 | Demonstrate comprehensive knowledge; use it in novel situations |
| Point of Inflection | 4 | Demonstrate mastery; connect to other skills |
A lot of rubrics differentiate the score based on algebra (3 if you get part of the idea but make conceptual errors; 4 if you get the idea but make algebra errors). This makes sense in a math class, but since I’ll be testing things like “can predict circuit behaviour”, I’m tempted to make algebra a separate standard. If you make an algebra mistake that’s serious enough to draw wrong conclusions from, it’s a conceptual error. But a lot of algebra mistakes have such small effects on the goal that they don’t change your predictions or problem-solving strategy. There’s a whole other course that tests algebra, so maybe it’s wasteful to make it the main criterion in my grading scheme. I think writing is also a separate standard. There will be chances to assess the students’ ability to integrate their own writing with their own measurements/troubleshooting as part of project work (which I think I’ll grade on a separate rubric).
It’s possible to score based on how much assistance someone needs, but I’ll be assessing only in situations where assistance is not available. “Demonstrate full understanding” and other similar language like “proficient” is not specific enough for me — I worry that students would have no definitive way of knowing what I consider proficient, short of asking me a million times a week (causes anxiety for them; causes insanity for me).
The last two, though, seem reasonable to me. What I’m really testing — if I had to pick one thing — is ability to apply the knowledge to a situation that’s not exactly like anything in the textbook (though it might be a mix-and-match combination of previous problems). This is also the hardest thing for students to understand. I get a lot of pushback from students who are angry that I’m testing them on things I “haven’t taught” them — this despite practice problems, in-class Q&A or group work, and hands-on activities practising the skills from various angles. There’s a high correlation between that attitude and the Fs that came through on the last test. So I’d like to clarify that, in fact, that is what I teach… and what I expect them to aim for.
I’m tempted to use a 5 point system. Someone wrote recently that they didn’t like 5 because it gave a “middle ground” score of 3 that was neither good nor bad. I disagree: there is always a score of 0. So it’s the 4-point scale that has a fence-sitting score. In the final weighting, 2/4 will probably translate into 50%, which is the cutoff grade for supplemental exams. That’s definitely something I want to avoid, both for my sanity and the students’. They will want to know which side of passing (60%) they’re on: a 3 means yes, a 2 means “not close”. So I think my synthesis so far is this:
1. understands something about the concept
2. understands lots of things about the concept but not everything
3. gets the concept, has problems with strategy or application of skill
4. applies the skill to a previously practised situation
5. chooses appropriate concepts to combine with and/or applies to novel situations
The wording needs some work, but basically breaks down the way I want it to. Assuming I’ll average these for the final grade, a 4 is 80% (this is technically considered “honours” by the school, so I might have to make this worth 79% or something like that). To get a 5 you’d have to do something cool. If you did something cool and it mostly worked, I’d be ok giving a 4.5. Some students will probably be confused about what the “appropriate” other skills are for getting a 5, but I think I can draw up some guidelines for that. On the other end of the scale, if you don’t have a good grip on what the point is, you are below 60%. Seems sensible so far.
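The averaging-to-percent plan above is simple enough to spell out (my own sketch of the arithmetic described, assuming a plain average of per-skill scores out of 5):

```python
# Hedged sketch of the proposed grade conversion: average the
# per-skill scores (each out of 5) into a percentage, so an
# all-4s student lands at 80% and an all-3s student at 60%.
def final_percent(scores):
    """scores: one 0-5 score per skill."""
    return 100 * sum(scores) / (5 * len(scores))

print(final_percent([4, 4, 4, 4]))   # the "honours" boundary
print(final_percent([3, 3, 3, 3]))   # just on the passing side of 60%
print(final_percent([5, 4, 4, 3]))   # one 5 offsets one 3
```

This also shows why the 3/2 boundary matters so much in the scheme: with a straight average, a 3 sits exactly at the 60% pass line, so every skill below 3 has to be offset by one above it.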
Next up: some really interesting ideas I’ve stumbled across for evaluating rubrics. I’ll put this one through the wringer and see what survives.