You are currently browsing the category archive for the ‘Lesson planning’ category.
I’d love to get rid of my lab books. They are the standard, canned variety: the instructions are overly helpful and they ask “known-answer questions.” But I just couldn’t overhaul them this semester — my free time is between 2AM and 7AM.
I did find 2 quick fixes, though, that made canned labs less bad.
1: Assigning the purpose, not the title.
No, seriously — it made a difference when I stopped telling students to “Do Lab 31”, and wrote the purpose of the lab on the skills sheet. The skill is now “predict the results of a low-pass frequency sweep. Build a circuit to test your predictions.” Which is basically what Lab 31 is about. (I use a consistent wording, which probably helps too. “Predict (something). Build a circuit to test your predictions.”)
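As a concrete illustration of the kind of prediction I'm asking for (the part values and function name here are my own invention, not from the lab book), a first-order RC low-pass filter's voltage gain at frequency f is 1/√(1 + (f/f_c)²), with cutoff f_c = 1/(2πRC). A quick sketch of a predicted frequency sweep:

```python
import math

def low_pass_gain(f_hz, r_ohms, c_farads):
    """Predicted voltage gain of a first-order RC low-pass filter at f_hz."""
    f_c = 1.0 / (2.0 * math.pi * r_ohms * c_farads)  # cutoff (corner) frequency
    return 1.0 / math.sqrt(1.0 + (f_hz / f_c) ** 2)

# Hypothetical part values: R = 1 kΩ, C = 100 nF, so f_c is about 1.59 kHz.
for f in [100, 1_000, 1_590, 10_000, 100_000]:
    print(f"{f:>7} Hz: predicted gain = {low_pass_gain(f, 1e3, 100e-9):.3f}")
```

A student could sweep the function generator over the same frequencies and set the predicted numbers next to what the scope actually shows.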
Some students thumb through the lab book until they find one that suits their purpose. Some students just make something up. In both cases, they now know what they’re looking for.
It gives them some choice about the level of guidance they want. It gives them a backup plan if they get frustrated and don’t know what to do next. They can use the lab procedure as a recipe, or they can use it for inspiration. Having that control seems to improve their ability to assess whether they have met the requirement (“test your predictions”). It also improves their reading comprehension of the lab book (having a purpose makes things make sense… thanks Cris Tovani!). If that’s all they needed, why didn’t they read the purpose that’s printed in the lab? I dunno. (Ok, the purpose is often badly written and buries the lead).
Anyway, even those who use the lab procedure word-for-word are now choosing which words to follow. What I mean is, they evaluate each step in the lab procedure to find out if it’s necessary to meet my requirements. I say again — they evaluate each step in the lab procedure.
If they decide that they can demonstrate the skill I asked for without doing step 14, then they figure they’ve pulled one over on me. I never thought I’d be so delighted to see them game the system.
2: Measuring how wrong the theory is, not how right.
“When things are acting funny, measure the amount of funny.” (Bob Pease, National Semiconductor)
Now that’s a lab purpose I can get behind: find the funny, and measure it.
What if the goal of the inquiry was not to find the right answer (which after all is already known) but to find out how wrong the right answer is? In other words, let’s discover the extent to which the theory fails. This is several kinds of useful: the result is no longer known, and it gives you a gut feeling for how much your experimental data should reasonably diverge from predictions. It lets the students evaluate the model, instead of being evaluated by it. It also raises the question: “What else is going on that we don’t know about?”
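A sketch of what that could look like in practice (the numbers below are invented for illustration, not real lab data): instead of checking whether measurements are “close enough,” compute each measurement’s signed divergence from its prediction and watch where it grows.

```python
def percent_divergence(predicted, measured):
    """Signed percent difference between a measurement and its prediction."""
    return 100.0 * (measured - predicted) / predicted

# Hypothetical filter-gain data: (frequency in Hz, predicted gain, measured gain).
# The interesting result isn't "close enough"; it's *where* the divergence grows.
data = [(100, 0.998, 0.99), (1_000, 0.847, 0.84), (10_000, 0.157, 0.19)]
for f, pred, meas in data:
    print(f"{f:>6} Hz: theory is off by {percent_divergence(pred, meas):+.1f}%")
```

A table of small, roughly constant divergences says the model holds; a divergence that balloons at one end of the sweep says something unmodeled is going on there.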
Yesterday we had a lab on band-pass filters. About half of the class discovered the parasitic capacitance of inductors. They were excited about this. I can’t help thinking that this happened partly because the goal was to test their predictions — not to match them. (Also, because I don’t require them to fill in the pre-printed table in the lab report, they increased the frequency until the signal generator topped out — just to see what would happen).
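The surprise they found has a simple model. Sketching it with made-up part values (not from our lab): a real inductor’s winding capacitance sits in parallel with its inductance, so above the self-resonant frequency 1/(2π√(LC)) the part stops behaving like an inductor at all.

```python
import math

def inductor_impedance(f_hz, l_h, c_par_f):
    """|Z| of an inductor modeled with a parallel parasitic (winding) capacitance."""
    w = 2.0 * math.pi * f_hz
    z_l = complex(0.0, w * l_h)                # ideal inductive reactance
    z_c = complex(0.0, -1.0 / (w * c_par_f))   # parasitic capacitive reactance
    return abs(z_l * z_c / (z_l + z_c))        # parallel combination

# Made-up part: L = 1 mH with 10 pF of parasitic capacitance, so self-resonance
# falls near 1.6 MHz. Below it, |Z| rises like an ideal inductor's would;
# above it, |Z| *falls*, which the ideal model never predicts.
for f in [10e3, 100e3, 1e6, 10e6]:
    ideal = 2.0 * math.pi * f * 1e-3
    real = inductor_impedance(f, 1e-3, 10e-12)
    print(f"{f/1e3:>8.0f} kHz: ideal {ideal:>8.0f} ohm, with parasitics {real:>8.0f} ohm")
```

Below self-resonance the parasitic capacitance barely matters; above it, the measured impedance drops while the ideal model says it should keep climbing. That gap is exactly the kind of funny worth measuring.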
Yes, it’s important that they understand measurement error, and assess their lab technique against some known results. But my students often interpret “sources of error” as a shameful failure which, like other shameful failures, should be hidden and/or lied about. I hope I’m not mangling experimental philosophy by challenging students to develop more sophisticated predictions that take into account the effects of common sources of error. You test the theory. Don’t let it test you. If your data doesn’t come out the way you predicted, the prediction is wrong. Is your model too simple? Are you measuring something other than what you thought? In either case, answer those questions. Stop trying to make reality match theory. Reality is not wrong. Reality is real.
“The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! (I found it!) but rather, ‘hmm… that’s funny…'” (Isaac Asimov, probably apocryphal)
No cool new discoveries will be made as long as “funny” means “wrong.”
My plan for this semester was not to do battle with the four horsemen of the curricular apocalypse (Time, Textbooks, Tradition, and Tests). I knew they were out there, but I was ignoring them. I was going to create a smaller, simpler project for myself. One that would result in a sensible amount of sleep and possibly even the occasional pretense of a social life. I vowed that I would tackle only the grading piece of the eschatological pie, changing to what I call a “skills-based” scheme.
Now it’s a month into the semester. We’ve barely cracked the textbook or the lab book. My lesson plans have radically changed. Time management has radically changed, for me and the students. And tests… well, they’re smaller, lower-stakes, and can often be replaced or supplemented by shop demonstrations. I didn’t mean to do it. But the changes in the grading scheme started a snowball that changed lots of other things too.
Textbooks (Or Lack Thereof)
I created a list of skills that students had to demonstrate to complete a topic unit. That meant I had to think hard about what skills are actually indispensable. That in turn made me think hard about why I teach what I teach, and why the textbook includes what it includes. I asked myself lots of questions like “Why do they need this skill? When will they need it? In what context will they use it?” I ended up being much more focused on our goals. Last year I questioned whether the textbook treatment had too much depth, too little, or depth in the wrong places. This year I was able to start answering those questions. Now that I have more information, I can’t bring myself to not use it. That means the textbook and lab book are more like dictionaries and less like instruction manuals.
Tradition: Lesson Plans
Once I realized that the textbook didn’t lead where I wanted to go, I had to develop some lesson plans in a hurry. This rubric for application problems helped a lot. Developed by Dan Meyer for math classes, it helps students find the meaning behind the math, and connect it to what they know about the real world.
Since I’m especially concerned with synthesis and problem-solving, I’m looking for ways to help students find meaning in links between ideas. Kate Nowak’s guidelines were the best, most concrete suggestions I found.
Time and Tests
Well, you can retry a test question any Wednesday afternoon. Or, you can show your mastery of that skill by building a circuit, if you prefer — either during shop period or in open shop time on Tuesdays. This has opened up lots of interesting conversations. For one, many students have discovered gaps in their fundamental skills that neither they, nor I, suspected. A second-year student blurted out in class last week, “Is the cause on a graph always on the x-axis?!”
Having some very basic questions on the test has helped me figure out how to coach them. Some students who have never approached me for extra help are talking to me after class about why they didn’t get credit for something. Theory: if you get a small, simple question wrong, you can ask the teacher a small, simple question. If you get a big complicated problem wrong, it seems futile or maybe impossible to even figure out what question to ask. The easy questions at the beginning of the test also reduce test anxiety, I think (can’t prove this).
In order to get 100% for a unit, students must complete a more in-depth problem, develop their own problem-solving strategy, and combine two or more topics. I throw one of these questions on each test. They aren’t necessarily difficult — just unfamiliar applications of familiar skills. But they’ve become a great way to introduce a new topic. On each week’s quiz, the “Level 5” question is a simple problem from the next chapter. Result: most of the class is attempting problems that I haven’t explicitly taught yet. Even if they don’t get the right answer, the process helps them clarify their assumptions. At the end of the quiz, they’re dying to know how it works. This leads to some of our best conversations.
The students hand in an answer sheet at the end of the quiz, which I later use to enter their scores. They keep the quiz paper, which (if they followed instructions) has all their calculations, sketches, etc. Then we immediately grade the quiz as a class. Ideally, they know instantly what they got right and what they need to work on. Realistically, they hate writing comments on their quiz papers, so they quickly forget which ones are right and which are wrong, or why they’re wrong. (Why? Is it because it forces them to face that they made a mistake?) Then, they can’t tell what they need to reassess.

So, for the last test, I asked them to pass their quiz papers in to me so I could see the feedback they were writing to themselves, and write back to them about it. I was dismayed to see how many students, when forced to actually grade their papers, wrote incredibly negative comments to themselves (“Don’t rush, you moron!” or “Stupid stupid stupid!”). Wow. Good for me to know, but I’m not sure how to address this, other than to write back with a comment that I won’t stand for my students being insulted in my class — not even by their past selves.
About half of my class has a hard time seeing the connections between different ideas (the rest of the class is bored to tears if we spend any time on it). It’s been hard to figure out how to handle this. But some interesting results have surfaced this month. Whether they’re due to the changes in the grading scheme etc., we’ll never know. Example: my colleague is introducing filter circuits in a very different context than the one in which I teach them. Most students don’t even recognize that it’s the same circuit at first. He had barely put the circuit on the board when a student announced, “Isn’t that just a low-pass filter?” Another student created a circuit that demonstrates time-constant switching — foreshadowing next week’s topic. Then there was the student who thought they had found a sneaky loophole in my new grading scheme. “Can I use a buffer circuit from Digital class to demonstrate that I understand op-amp gain for Solid State class?” I refrained from weeping with joy or jumping up and down. “I suppose,” I agreed.
Skills-Based Grading: Transformative learning or edu-fad?
A number of people have written about the idea that changing a grading system does not magically improve learning or teaching. That’s true. But I think it’s also true that redesigning a grading scheme while focusing on skills (or “standards” or “outcomes” or whatever they’re called) provides a lot of information that can be used to improve learning, or at least to find out where the problem areas are. For me, at least, the more of that information I had, the less I was able to continue doing what I had always done.