Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty. It was part of a week-long course on assessment and evaluation. I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.
Some Examples of Feedback
“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”
“You practiced what you were preaching”
“The forms can be changed and used in my classes”
“Great facilitator — no jargon, plain talk, right to the point! Excellent. Very useful.”
“You were great! I like you! Good job! (sorry about that) “
“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”
“Needed more time”
“Good quality writing exercise was a bit disconnected”
“Finally a tradeswoman who can relate to the trades”
In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them. The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good. Of course, I loved the humour in the “You were great” comment shown above – if someone can parody something, it’s pretty convincing evidence of understanding. I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.
Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room. They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it. It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).
- Peer assessment is a process of having students improve their work based on feedback from other students
- To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
- Peer assessment can help students improve their judgement about their own work
- Peer assessment can help students depend less on the teacher to solve simple problems
- Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
- Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it
I distributed this handout for people to follow. I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.
I was nervous enough that I wrote speaking notes that are almost script-like. I rehearsed enough that I didn’t need them most of the time.
Avoiding Pitfall #1: People feeling either patronized or left behind
I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs. I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts. There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review. At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.
After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment. Questions fell into these categories:
- How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
- What if students are too intimidated/unconfident to share their work with their peers? (At least one participant worried that this could be emotionally dangerous)
- Why would students buy in — what’s in it for the assessor?
- When/for what tasks can it be used?
- Logistics: does everyone participate? Is it required? Should students’ names be on it? Should the assessment be written?
- How quick can it be? We don’t have a lot of time for touchy-feely stuff.
- Can this work with individualized learning plans, where no two students are at the same place in the curriculum?
I really didn’t see these questions coming. I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being. That point was raised by participants ranging from School of Trades to the Health & Human Services faculty.
It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other. I pointed that out, and saw lots of head nodding.
Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee. It also wasn’t about group projects. We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty. It was not about people’s feelings. It was about their work.
I saw jaws drop. Some School of Trades faculty actually cheered. It really gave me pause. In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.” I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it. My emotions are sometimes relevant to this process, and sometimes not. I wonder how other people interpret the directive to “reflect.” I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little strategy about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.
How to get honest peer assessments?
I talked briefly about helping students generate useful feedback. One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient. The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality). Later, I might collect and read the feedback before giving it back to the recipient. I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).
Another way to lower the intimidation factor is to have everyone assess everyone. This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return. It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.
Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all. It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later. Students are quick to realize that it might as well be sooner. Also, I typically do this during class time, so I had a roughly 100% completion rate last year.
How to get useful peer assessments?
I went ahead with my plan to have workshop participants think about solder joints. A good solder joint is shiny, smooth, and clean. It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on. I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.
Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (i.e. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback. I asked participants to work through the feedback, determining if it met these criteria:
- Identifies strengths and weaknesses
- Gives clear suggestion about what to do next time
- Focuses on the student’s work, not their talent or intelligence
We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2). This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not. Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop. I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.
Avoiding Pitfall #2: This won’t work in my classroom
Surprisingly, this didn’t come up at all.
I came back often to the idea that there are things students can assess for each other and there are things they need us for. I made sure to reiterate often that each teacher would be the best judge of which tasks were which in their discipline. I also invited participants to consider whether a student could fully assess that task, or could they only assess a few of the simpler criteria? Which criteria? What must the students necessarily include in their feedback? What must they stay away from, and how is this related to the norms of their discipline? We didn’t have time to discuss this. If you were a participant in the workshop and you’re reading this, I’d love to hear what you came up with.
Pitfall #3: Disconnected/too long
Well, I wasn’t able to avoid this. After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback. They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality). We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.
Questions that didn’t get answered:
Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class. The group meets at a specified time but each student is working at their own pace. I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?
One teacher mentioned a student who was adamant that peer assessment violated their privacy, that only the teacher should see it. I never ran into this problem, so I’m not sure what would work best. A few ideas I might try: have students assess “fake” work at first, so they can get the hang of it and get comfortable with the idea, or remove names from work so that students don’t know who they’re assessing. In my field, it’s pretty typical for people to inspect each other’s work; in fields where that is true, I would sell it as workplace preparation.
We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment. My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion. Mostly, that’s because attitudes and opinions are not assessable for accuracy. (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion). I suppose I could have students assess each other’s opinions and attitudes for clarity — not whether your position is right or wrong, but whether I can understand what your position is. I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.
Obviously I encourage students to share their feelings in lots of big and small ways. In practice, they do — quite a lot. But I can’t see my way clear to requiring it. Partly it’s because that is not typically a part of the discipline we’re in. Partly it’s because I hate it, myself. At best, it becomes inauthentic. The very prospect of forcing people to share their feelings seems to make them want to do it less. It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others. I’m trying to help them get better at making those decisions themselves — not make those decisions for them. Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion. Judgments are fair game, and must be assessed for good-quality reasoning. Opinions and feelings are not. We can share them and agree or disagree with them, but I don’t consider that to be assessment.
Finally, a participant asked about how to build student buy-in. Students might ask, what’s in it for me? What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates. They read it voraciously, with much more interest than they read feedback from me. In the end, people love reading about themselves.
This semester, I turned over the DC Circuits course to the questions my students asked. I started because their questions were excellent. But I continued because I found ways to nurture the question-creation so that it both introduced students to “thinking like a tech” and, not coincidentally, “covered” all the curriculum. First we needed a shared purpose: to be able to predict “what electrons are doing” in any DC circuit by the end of the semester. Next, we needed to generate questions. Here are a few examples of how it happened.
Class discussion parking lot
In the first few days of the semester, we brainstormed a list of ideas, vocab, and anything else anyone remembered about magnets, then about atoms. I asked questions, introducing the standards for reasoning by asking “What exactly do you mean by … ” or “How much…” or “How could idea A and idea B be true at the same time?” Any time we reached “I don’t know” or a variety of apparently contradictory answers, I phrased it as a question and wrote it down. This turned out to be a useful facilitation technique, to be used when students were repeating their points or losing patience with a topic. I checked with the class that I had adequately captured our confusion or conflicting points, and stored them for later. Two days into the course we had questions like these:
- What does it mean for an electron to be negative? Is it the same as a magnet being negative?
- Does space have atoms?
- Can atoms touch?
- We think protons are bigger than electrons because they weigh more, but do they actually take up more volume?
I continued throughout the semester to gather their questions; I periodically published the list and asked them to choose one to investigate (or, of course, propose another one). We never ran out.
I assess their reasoning
At the beginning, I waded gradually into assessing students’ reasoning. We started with some internet research where everyone had to find three sources that contributed to our understanding of our questions; I asked them to use Cornell-style note paper to record what their source said on the right, their own thoughts (questions, connections to their experience, visuals, whatever) on the left, and a summary at the bottom. (Later I tried this with a Google Doc, but went back to paper because sketches and formulas are so much easier there.)
I collected these and wrote feedback about clarity, precision, logic, and the other criteria for assessing reasoning. “What do you mean by this exactly?” “How much more?” “Does the source tell you what causes this?” “Do you think these two sources are contradictory?” “Have you experienced this yourself?” I also kept track of all the questions they asked and added them to the question list. Here’s an example, showing a quote from the source (right) and the student’s thinking (left) with my comment (bottom right).
There’s a lot of material to work with here: finding parallels between magnetic and electric fields; what the concept of ground means exactly; and an inkling of the idea that current is related to time. I love this example because the student is working through technical text, exploring its implications, comparing them to his base knowledge, and finding some “weirdness.” I mean, who ever heard of a magnet getting weaker because it was lying on the ground rather than sticking to the fridge? Weirdness is high on my priority list for exploring questions.
I continued periodically to ask students to record both a published source and their thoughts/questions. There’s something about that gaping, wide blank column on the left that makes people want to write things in it.
Students respond to my assessments
The next assignment was for students to pencil in some comments responding to my comments. This got the ball rolling; they started to see what I was asking for. They also loosened up their writing a bit; there’s something about writing diagonally in the corner of a page that makes people disinclined to use 25-cent words and convoluted sentence structure. Exhibit A is the same student’s response to my question:
Ok, so this student is known for flights of fancy. But there’s even more to work with here: air as an insulator; the idea of rapid transfers of electrons when they come in contact with certain materials — as if the electrons are trying to get away from something; an opportunity to talk about metaphors and what they’re good for.
This exercise also set the stage for the idea that the comments I leave on their assignments are the beginning of a conversation, not the end, and that they should read them. Finally, it generated questions (in their responses). I was pretty broad in my interpretation of a question. If a student claimed that “It’s impossible for X to the same as Y,” and couldn’t substantiate it in response to my question, it would end up in the question list as “Can X be the same as Y?”
They assess a published source’s reasoning
The information the students found fell into four broad categories. I printed copies of the results for everyone. On the next day, students broke into four groups with each group analyzing the data for one category. They had to summarize and evaluate for clarity, precision, lack of contradictions, etc. I also asked them to keep track of what the group agreed on and what the group disagreed on. As I circulated, I encouraged them to turn their disagreements into questions.
I assess their assessments
I gave groups live feedback by typing it into a document projected at the front of the room while they were working. They quickly caught on that I was quoting them and started watching with one eye. I lifted this group feedback model wholesale from Sam Shah, except that my positive feedback was about group behaviours that contributed to good-quality reasoning (pressing each other for clarification, noticing inferences, noticing contradictions, etc.).
Rubric for Assessing Reasoning
I read their submissions and wrote back. Next day, they had to get back into their groups, consider my feedback, and decide whether to change anything. Then I asked them to present to their classmates, who would be assessing the presentations. I knew they would need some help “holding their thinking” so I made this “reading comprehension constructor” (à la Cris Tovani) based on our criteria for evaluating thinking: it’s a rubric for assessing reasoning.
If you look closely you’ll see that three new criteria have appeared, in the form of check boxes; they are criteria about the source overall, not about the quality of reasoning. Is the source reviewed by experts? Is the source recent (this forces them to begin looking for copyright dates)? Is the source relevant to our question? I asked that they carefully consider the presentations, and accept only the points that met our criteria for good quality reasoning. Each student filled out a rubric for each presentation. Rubrics were due at the end of class.
Voila: more questions.
Conclusion: Attributing authorship helps
I suspect that my recording of who asked which questions is part of what makes this work (see this post by John Burk for an interesting video of John’s colleagues discussing this idea with Brian Frank). The students know and trust that I will attribute authorship of their ideas now; it seems to make them more willing to entrust their questions to me. They’ve started saying things like “I don’t want to reject this idea from the model, I think it’s a good starting point, but next I think we need to know more about what voltage is doing exactly that causes current. Could you add that question to the list?”
Update: the most recent version of my grading policy has its own page, “How I Grade,” on a tab above.
The new assessment and reporting plan is done… for now. Here’s the status so far.
The Rubric — Pro
If you score some level-3 or level-4 questions, you don’t get credit for them until you’ve finished the level-2 skills. It doesn’t invalidate the more advanced work you’ve done; you don’t have to necessarily do it all over again — it’s sort of held in the bank, to be cashed in once the level 2 stuff is complete. It doesn’t penalize those who choose a non-linear path, but it doesn’t let basic skills slip through the cracks.
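The “held in the bank” rule can be sketched in a few lines of code. This is a minimal sketch under my own assumptions, not the actual gradebook logic, and all the names here are made up for illustration:

```python
# Sketch of the "banked credit" rule: level 3-5 work is recorded but the
# reported score stays capped at 2 until every level-2 skill is complete.
# (Illustrative only -- not the author's actual gradebook.)

def reported_score(level2_done: dict, banked_levels: list) -> int:
    """Return the score shown for one topic.

    level2_done maps each level-2 skill name to True/False (complete or not);
    banked_levels lists any level 3-5 scores already demonstrated.
    """
    if not all(level2_done.values()):
        # Higher-level work is held in the bank, not voided:
        # the reported score is capped until level 2 is finished.
        return 2
    return max(banked_levels, default=2)

reported_score({"Ohm's law": True, "series circuits": False}, [4])  # capped at 2
reported_score({"Ohm's law": True, "series circuits": True}, [4])   # banked 4 cashed in
```

The point of the cap is exactly what the paragraph says: a non-linear path isn’t penalized (the 4 is still there), but the basics can’t slip through the cracks.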
Choosing Skills — Con
Oh boy, this is definitely ridiculous. As you can see, there are way too many. It actually got worse since my first draft, peaking in version 0.6 and coming back down in the one linked above. These guidelines helped me beat it back. I’m telling myself that the level 2 skills will repeat in each topic, and that it won’t end up being 100 items in my gradebook. On the other hand, this program has 6 semesters-worth of material crammed into 4 semesters-worth of time. It is like being carpet-bombed with information. And yet, when our grads get hired, there is always more their employers wish they knew. The previous grading system didn’t create the problem; this new system will not solve it. The whole project would be frankly impossible without SBG Gradebook, so bottom-of-my-heart thanks to Shawn Cornally and anyone else involved.
Re-assessment — Pro
Re-assessment can be initiated by me (quizzes) or by the student (by showing me that they’ve done something to improve). Grades can go down as well as up. I took to heart the suggestions by many people that one day per week should be chosen for reassessment. We’re blessed with 3-hour shop periods, which is typically more time than the students need to get a lab exercise done. So shop period isn’t just for reassessing shop things any more; you can also reassess written things then too.
Synthesis — We’ll see
Some synthesis skills I consider essential, like “determine whether the meter or the scope is the best tool for a given measurement”. Those are level-3 skills, with their individual parts included as level-2 skills. That means you have to do them to pass. It also means I have to explicitly teach students not only how to use a scope and a meter, but how to “determine”. Seriously, they don’t know. Sometimes I weep in despair that it’s possible to graduate from high school, maybe even get a job, work for a few years, have a couple of kids, and still not know how to make a decision strategically. (Or at least, not be able to call on that skill while you are physically located inside a classroom). Other days I stop tilting at windmills and start teaching it, helping students recognize situations where they have already done it, and trying to convince them that in-school and everywhere-else are not alternate universes.
Other forms of synthesis are ways of demonstrating excellence but not worth failing someone over; these become level-4 or 5 skills. It still tells the student where they are strong and where they can improve. It tells me and their next-semester teachers how much synthesis they’ve done. That’s all I need.
This directly contradicts my earlier plan to let students “test out” of a skill. But, because level 2 and level 5 are now different skills, I don’t have to write 5 versions of the question for each skill. I think that brings the workload (for the students and me) back down to a reasonable level, allowing me to reassess throughout the term. The quizzes are so cumulative that I don’t think an exam would add anything to the information.
Retention — Too soon to tell
It’s important to me to know how you’re doing today, not last month. That means I reserve the right to reassess things any time, and your score could very well go down. This is bound up with the structure of the course: AC Circuits has 6 units, each of which builds directly on the previous one (unlike a science class where, for example, unit 1 might be atoms and unit 2 might be aardvarks). Con: a missed skill back in unit 1 will mess you up over and over. Pro: provides lots of practice and opportunities to work the same skill from different angles. With luck, Unit 5 will give you some insight on Unit 2 and allow you to go back and fix it up if needed.
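The retention rule amounts to “the most recent assessment is the grade.” A minimal sketch, assuming the gradebook keeps a dated history per skill (the structure here is my own invention, not the author’s records):

```python
from datetime import date

# Sketch of the retention rule: keep every attempt, report the most recent
# result -- so a reassessment can lower a score as well as raise it.
# (Assumed record structure, for illustration only.)

def current_score(history):
    """history: list of (assessment_date, score) pairs for one skill."""
    if not history:
        return None
    latest = max(history, key=lambda entry: entry[0])  # newest date wins
    return latest[1]

history = [(date(2011, 9, 12), 4), (date(2011, 10, 3), 3)]
current_score(history)  # the October reassessment lowers the grade to 3
```

Keeping the whole history (rather than overwriting) also preserves the record of where a student has been, which matters for the “fix it up later” conversations.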
Feedback — Pro, I think
This will be tough, because there’s not enough time. The concepts in these courses are complex and take a long time to explain well. The textbook is a good reference for looking up things you already know but not much good at explaining things you don’t know. That means I talk a lot in class. At best, I get the students participating in conversations or activities or musical re-enactments (don’t laugh — “Walk like… an ee-lec-tron” is one of my better lesson plans) but it leaves precious little time for practice problems. I’ll try to assign a couple of problems per night so we can talk about them in class without necessarily doing them in class.
I’ve also folded in extra feedback to this “weekly portfolio” approach I stole from Jason Buell. Each student has a double-pocket folder for their list of topic skills. There are a couple of pieces of looseleaf in the brads too. When they’ve got something they either want feedback on (maybe some especially-troublesome practice problems that we didn’t have time to review in class) or that they want to submit, they can write a note on the looseleaf, slide the documentation into the pocket, and leave it in my mailbox. I either do or do not agree that it sufficiently demonstrates skills X, Y, and Z, and write them back. We did a bit of this with a work-record book last semester, and the conversations we had in writing were pretty cool. I’m looking forward to the “message-board” as our conversation goes back and forth. I hope to keep the same folders next year, so we can refer back to old conversations.
It’s official: I submitted the new plan for feedback and grading.
Since V0.1, I have gone through about 7 more versions, experimenting with a dizzying array of variables. My final result is pretty different from my original thoughts, but I think I’ve struck a balance I’m happy with.
I chose a “topic-oriented” scheme ripped off in large part from Always Formative, because it clarifies which skills go together. I also used his scoring system: you can’t get a 3 until you’ve completed all the level 2 stuff (although I switched to a 5-point system, for reasons that have not changed since this post). I like this setup because it suggests an order in which to tackle things. At the same time, it doesn’t prevent you from attempting harder problems, or recognizing that you’ve already done harder problems. So, it has scaffolding and flexibility at the same time.
Here’s what the tracking sheet looks like for the topic called AC measurement (a first-year course). Note that the students get the first two pages; the third page is a bank of level-5 questions that I may use if students ask for them.
Skills Are a Yes-Or-No Question
I also like the “yes or no” approach to the skills. Each skill is not graded on a rubric; you’ve either demonstrated the skill or you have not. I think this will make grades feel less like a “moving target” to the students, cutting down on time-wasters like “I got that question mostly right so I should get 4/5 instead of 3/5.” Now, that question is a skill. You either demonstrated that you have the skill, in which case you get a YES; or you did not, in which case you try again.
Finally, I chose this system because it is similar to the frankly excellent system that was set up by my predecessor. When I started a year ago, the students were already working at their own pace, each on their own project, demonstrating when they were ready, and re-demonstrating if they weren’t satisfied with the first time… but only in the shop. I’m looking forward to extending those benefits to our classroom time.
I stole some ideas from this post from Questions on rubrics. The comments are really rich too — worth reading.
| Category | # | Skill |
|---|---|---|
| Skills for Learning | 1. | Find the main ideas and their relationships in a textbook section |
|  | 2. | Show the relationship between written descriptions, electrical calculations, schematic symbols, and graphs |
| Background | 3. | Use algebra to prove a point |
|  | 4. | Use technical writing to prove a point |
|  | 6. | Use decibels to prove a point |
|  | 7. | Use Multi-Sim to check your work |
|  | 8. | What happens when you use superposition with AC? |
| Meter | 9. | Interpret a multimeter DC measurement |
|  | 10. | Interpret a multimeter AC measurement |
| Oscilloscope | 11. | Set up an oscilloscope |
|  | 12. | Measure frequency on an oscilloscope |
|  | 13. | Interpret an AC-coupled oscilloscope measurement |
|  | 14. | Interpret a DC-coupled oscilloscope measurement |
|  | 15. | Use two scope probes at the same time |
|  | 16. | Use the “difference” function on the scope |
| Measurement | 17. | Interpret an electrical measurement taken at one point in time |
|  | 18. | Interpret an electrical measurement taken across a span of time |
| Inductors | 19. | Is it safe to use this inductor in this circuit? |
|  | 20. | Is this inductor damaged? |
|  | 21. | What will happen in a DC inductor circuit? |
|  | 22. | What will happen in an AC inductor circuit? |
|  | 23. | What will happen in a DC resistor-inductor (RL) circuit? |
|  | 24. | What will happen in an AC resistor-inductor (RL) circuit? |
| Transformers | 25. | Is it safe to use this transformer in this circuit? |
|  | 26. | Is this transformer damaged? |
|  | 27. | What will happen in a DC transformer circuit? |
|  | 28. | What will happen in an AC transformer circuit? |
| Capacitors | 29. | Is it safe to use this capacitor in this circuit? |
|  | 30. | Is this capacitor damaged? |
|  | 31. | What will happen in a DC capacitor circuit? |
|  | 32. | What will happen in an AC capacitor circuit? |
|  | 33. | What will happen in a DC resistor-capacitor (RC) circuit? |
|  | 34. | What will happen in an AC resistor-capacitor (RC) circuit? |
| Superposition | 35. | What will happen in an RC or RL circuit if we use AC and DC at the same time? |
| Troubleshooting | 36. | Take measurements to prove or disprove your predictions about a DC circuit |
|  | 37. | Take measurements to prove or disprove your predictions about an AC circuit |
|  | 38. | Use your measurements and predictions to fix a circuit problem |
| Filters | 39. | Can a circuit act as a filter? If so, which kind? |
|  | 40. | Find the critical frequencies of a circuit |
|  | 41. | Graph a Bode plot |
|  | 42. | Does a circuit need a filter? What kind, or why not? |
| RLC | 43. | What will happen in a DC resistor-inductor-capacitor (RLC) circuit? |
|  | 44. | What will happen in an AC resistor-inductor-capacitor (RLC) circuit? |
| Resonance | 45. | What is a parallel resonant circuit used for? |
|  | 46. | What is a series resonant circuit used for? |
|  | 47. | Find the resonant frequency of a circuit |
|  | 48. | What is resonance? |
| Score | Description | Label |
|---|---|---|
| 1 | Understands something about the idea | Starting |
| 2 | Understands everything about the idea except for that one conceptual error | Hm |
| 3 | Understands the idea, has trouble applying it | Almost |
| 4 | Understands and can apply the idea to a familiar problem | Good |
| 5 | Understands and can apply the idea to a new kind of problem, or combine it with other skills in a new way | Wow! |
Circuit Prediction Check
I finished my first attempt at a skills list.
It’s way too long. There are 48 skills on it. Unfortunately, the curriculum actually requires that all of them be crammed into the semester. It’s a 15-week course, 60 hours. Not quite one skill per hour. It’s also light on troubleshooting. I’m trying to build in the skills that troubleshooting requires. They will inevitably troubleshoot. I’ll help them. I’ll give them pointers. I’ll host discussions and strategy sessions in class. But I might not grade them on it. Assessing troubleshooting (more than the once-over included here) might have to go in the Semiconductor Circuits course.
Initial plan for final grade:
- 100: sure, if you have 5s on everything!
- 80-99: no score below 4 (average the scores)
- 60-79: no score below 3 (average the scores, cap at 79)
Yes, this means that I’m willing to fail someone if they don’t get the point about even one of these. I hope I’m making the right decision here. Shortening the list of skills will probably help.
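For concreteness, here’s a rough sketch of those bands in code. This is illustrative only: the linear score-to-percent mapping (5 → 100, 4 → 80, 3 → 60) and the handling of the failing band are my own assumptions, not settled policy.

```python
def final_grade(scores):
    """Turn a list of 1-5 skill scores into a final percentage.

    Bands from the plan above:
      100      -- 5s on everything
      80-99    -- no score below 4 (average the scores)
      60-79    -- no score below 3 (average the scores, cap at 79)
      below 60 -- at least one skill scored below 3 (my assumption)
    """
    avg_pct = sum(scores) / len(scores) * 20  # linear: 5 -> 100%, 3 -> 60%
    if all(s == 5 for s in scores):
        return 100
    if min(scores) >= 4:
        return min(avg_pct, 99)
    if min(scores) >= 3:
        return min(avg_pct, 79)
    return min(avg_pct, 59)
```

Note the consequence: straight 4s land at exactly 80, while a single 2 pulls the whole grade below 60 no matter how strong the rest of the scores are, which is exactly the “fail on even one” policy above.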
You must show some evidence of preparation before you assess. I don’t care what it is, but I want proof of some kind (data you experimented with, practise problems you completed, etc.). This is about helping students take control of the cause-and-effect between their work and their score (as opposed to my work and their score). It should also give us some data about how effective study strategy X is for student Y.
Since I’ll have to tell the class that a quiz is coming up (probably a week in advance) so that they can prepare some evidence that they are ready, there will be no pop quizzes. I’m ok with that.
A lot of the skills have to be demonstrated in the shop. Ideal scenario: I pick 2-3 skills to assess. At the beginning of our 3-hour shop period (or, if I’m really organized, a few days before), I announce which skills I will be assessing. They have 3 hours to practise, and can let me know when they’re ready to demonstrate. The lab book will be a good source of circuits to practise on, but this system means I will no longer require them to complete the lab exercise as it is written. If they want to branch out and create their own experiment, I figure that’s a win. If they do half the lab and their skills are up to scratch, why force them to do the rest? If they need the rest for some other skill, they can always come back. Or get started on it in the remaining time. I suspect that they will try to make up their own experiments, realize it’s harder than it looks, and go back to following the lab book. That might be ok, since they’re doing it with a clear target in mind (today I need to prove that I can use two scope probes at the same time). At the same time, the students who are bored can have that extra challenge, and maybe score a 5 in the process.
In the past, many students have stumbled through the labs like zombies, skipping the explanations, the purpose, all that other direction-finding stuff. They get to the end and have no idea what the point was. Then they complain that the lab book is badly written. *laugh* And, well, it is. But if a clear skill-target can wake them up and get them doing this work with purpose, it’ll be an improvement. Especially since the circuits are so trivial, it’s often hard for the students to see the point. “Why bother hooking up two resistors in series?” Or, my absolute favourite — the vocational equivalent of “when in my life will I ever need this” is “when in industry will I ever need to hook up two resistors in series??” Ah, but the point of the lab wasn’t the resistors. It was the multimeter you used, and the process of testing your predictions. And yes, you will need those in industry…
Things that will get confusing: if they are practising their measurement skills and need my help, I am basically tutoring them. I would prefer not to tutor and assess the same skill in a single day, but I don’t want to discourage either one, so I guess I have to suck it up. I don’t want to make them afraid to admit they don’t understand something. Saying they have to assess another day if they got any help from me would just discourage them from asking for help when they need it.

Last year I had them do the lab, then evaluated them by asking them what the point was. That worked out ok, gave what seemed like meaningful data, so I guess this is no worse. And that part of the course is much less of a problem than the abstract skills and the metacognition.

One addition: I will try to make my help a little more “expensive” by requiring that they document their troubleshooting before I help them. DOCUMENT. YOUR. TROUBLESHOOTING. Amazing how troublesome those three words can be! No, I do not mean “just tell me what you did, no need to write it down”. No, I do not mean what your buddy at the next bench did when he looked at it. No, I do not mean your hunch that the transistor is blown, so you chucked it and put a new one in, without measuring anything or testing the transistor. Argh. Maybe start with requiring that they have one documented troubleshooting attempt, then gradually increase the number throughout the semester.
Without further ado, skills list is in the next post.
New grading strategy, part I: a grading rubric.
I was going to tackle the list of skills first. In fact I did. Then I found myself waffling over how much info makes up a skill. If the pieces are too small, there will be a thousand of them to grade. If they’re too big, it will be impossible to summarize them in a single grade. I kept thinking about whether it was assessable — which meant knowing how I would assess.
So I decided to try the rubric first. Here’s a bit of blogosphere roundup:
| Blog | Max Score | To get full marks |
|---|---|---|
| Continuous Everywhere but Differentiable Nowhere | 4 | Demonstrate the skill; use algebra correctly; use correct notation |
| Teaching Statistics | 4 | Demonstrate skill; solve a complex problem; solve independently (lower scores for solving with assistance) |
| Brain Open Now | 4 | Demonstrate the skill; use algebra correctly; use correct notation; draw valid conclusions |
| MeTA Musings | 4 | Demonstrate full understanding |
| Sarcasymptote | 5 | Demonstrate comprehensive knowledge; use it in novel situations |
| Point of Inflection | 4 | Demonstrate mastery; connect to other skills |
A lot of rubrics differentiate the score based on algebra (3 if you get part of the idea but make conceptual errors; 4 if you get the idea but make algebra errors). This makes sense in a math class, but since I’ll be testing things like “can predict circuit behaviour”, I’m tempted to make algebra a separate standard. If you make an algebra mistake that’s serious enough to draw wrong conclusions from, it’s a conceptual error. But a lot of algebra mistakes have such small effects on the goal that they don’t change your predictions or problem-solving strategy. There’s a whole other course that tests algebra, so maybe it’s wasteful to make it the main criterion in my grading scheme. I think writing is also a separate standard. There will be chances to assess the students’ ability to integrate their own writing with their own measurements/troubleshooting as part of project work (which I think I’ll grade on a separate rubric).
It’s possible to score based on how much assistance someone needs, but I’ll be assessing only in situations where assistance is not available. “Demonstrate full understanding” and other similar language like “proficient” is not specific enough for me — I worry that students would have no definitive way of knowing what I consider proficient, short of asking me a million times a week (causes anxiety for them; causes insanity for me).
The last two, though, seem reasonable to me. What I’m really testing — if I had to pick one thing — is the ability to apply the knowledge to a situation that’s not exactly like anything in the textbook (though it might be a mix-and-match combination of previous problems). This is also the hardest thing for students to understand. I get a lot of pushback from students who are angry that I’m testing them on things I “haven’t taught” them — this despite practise problems, in-class Q&A and group work, and hands-on activities practising the skills from various angles. There’s a high correlation between that attitude and the Fs that came through on the last test. So I’d like to clarify that this is, in fact, what I teach… and what I expect them to aim for.
I’m tempted to use a 5 point system. Someone wrote recently that they didn’t like 5 because it gave a “middle ground” score of 3 that was neither good nor bad. I disagree: there is always a score of 0. So it’s the 4-point scale that has a fence-sitting score. In the final weighting, 2/4 will probably translate into 50%, which is the cutoff grade for supplemental exams. That’s definitely something I want to avoid, both for my sanity and the students’. They will want to know which side of passing (60%) they’re on: a 3 means yes, a 2 means “not close”. So I think my synthesis so far is this:
1. understands something about the concept
2. understands lots of things about the concept, but not everything
3. gets the concept, has problems with strategy or application of the skill
4. applies the skill to a previously practised situation
5. chooses appropriate concepts to combine with, and/or applies the skill to novel situations
The wording needs some work, but it basically breaks down the way I want it to. Assuming I’ll average these for the final grade, a 4 is 80% (this is technically considered “honours” by the school, so I might have to make it worth 79% or something like that). To get a 5 you’d have to do something cool. If you did something cool and it mostly worked, I’d be ok giving a 4.5. Some students will probably be confused about what the “appropriate” other skills are for getting a 5, but I think I can draw up some guidelines for that. On the other end of the scale, if you don’t have a good grip on what the point is, you are below 60%. Seems sensible so far.
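Sketching that conversion in code (the straight multiply-by-20 mapping and the exact handling of the 79% adjustment are my assumptions for illustration, not final):

```python
def to_percent(scores):
    """Average 1-5 rubric scores (half-points like 4.5 allowed) and
    map linearly to percent: 5 -> 100, 4 -> 80, 3 -> 60.

    Since the school treats 80% as honours, an exact 4.0 average is
    reported as 79 rather than landing right on that line.
    """
    avg = sum(scores) / len(scores)
    return 79 if avg == 4 else avg * 20
```

So a student with straight 4s reports as 79, a 4.5 average (did something cool, mostly worked) reports as 90, and a 3 average sits exactly at the 60% passing line.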
Next up: some really interesting ideas I’ve stumbled across for evaluating rubrics. I’ll put this one through the wringer and see what survives.