When we start investigating a new topic or component, I often ask students to make inferences or ask questions by applying our existing model to the new idea. For example, after introducing an inductor as a length of coiled wire and taking some measurements, I expect students to infer that the inductor has very little voltage across it because wires typically have low resistance. However, for every new topic, some students will assume that their current knowledge doesn’t relate to the new idea at all. Although the model is full of ideas about voltage and current and resistance and wires, “the model doesn’t have anything in it about inductors.”
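The inference I'm hoping for can be illustrated with a quick Ohm's-law estimate. This is a sketch with made-up numbers, not measurements from class: a coiled length of wire has a small DC resistance, so in series with a much larger resistance it drops only a small fraction of the supply voltage.

```python
# Hypothetical values for illustration; not measurements from class.
coil_resistance = 2.0     # ohms: a coiled length of wire has low resistance
series_resistor = 1000.0  # ohms: the other resistance in the loop
supply = 12.0             # volts

# Ohm's law applied to the whole loop, then to the coil alone
current = supply / (series_resistor + coil_resistance)
coil_voltage = current * coil_resistance

print(coil_voltage)  # a few hundredths of a volt: almost nothing
```

That "almost nothing" is exactly the prediction I want students to make by applying what they already know about wires.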
There are a few catchphrases that damage my calm, and this is one of them. I was discussing it with my partner’s daughter, who’s a senior in high school, and often able to provide insight into my students’ thinking. I was complaining that students seem to treat the model (of circuit behaviour knowledge we’ve acquired so far) like their baby, fiercely defending it against all “threats,” and that I was trying to convince them to have some distance, to allow for the possibility that we might have to change the model based on new information, and not to take it so personally. She had a better idea: that they should indeed continue to treat the model like a baby — a baby who will grow and change and isn’t achieving its maximum potential with helicopter parents hovering around preventing it from trying anything new.
The next time I heard the offending phrase, I was ready with “How do you expect a baby model to grow up into a big strong model, unless you feed it lots of nutritious new experiences?”
It worked. The students laughed and relaxed a bit. They also started extending their existing knowledge. And I relaxed too — secure in the knowledge that I was ready for the next opportunity to talk about “growth mindset for the model.”
This year I’ve really struggled to get conversation going in class. I needed some new ways to kick-start the questioning, counter-example-ing, restating, and exploring implications that fuel inquiry-based science. I suspected students were silent because they were afraid that their peers and/or I would find out what they didn’t know. I needed a more anonymous way for them to ask questions and offer up ideas.
About that time, I read Mark Guzdial’s post about Peer Instruction in Computer Science. While exploring the resources he recommends, I found this compelling and very short PI teacher cheat sheet. I was already curious because Andy Rundquist and Joss Ives were blogging about interesting ways to use PI, even with small groups. I hadn’t looked into it because, until this year, I’ve never been so unsuccessful in fostering discussion.
The cheat-sheet’s clarity and my desperation to increase in-class participation made me think about it differently. I realized I could adapt some of the techniques, and it worked — I’ve had a several-hundred-percent increase in students asking questions, proposing ideas, and taking part in scientific discourse among themselves. Caveat: what I’m doing does not follow the research model proposed by PI’s proponents. It just steals some of their most-easily adopted ideas.
What is Peer Instruction (PI)?
If you’re not familiar with it, the basic idea is that students get the “lecture” before class (via readings, screencasts, etc), then spend class time voting on questions, discussing in small groups, and voting again as their understanding changes. Wikipedia has a reasonably clear and concise entry on PI, explaining the relationship between Peer Instruction, the “flipped classroom”, and Just-In-Time Teaching.
Why It’s Not Exactly PI
- I don’t have clickers, and don’t have any desire for them. If needed, I use home-made voting cards instead. Andy explains how effective that can be.
- I prefer to use open-ended problems, sometimes even problems the students can’t solve with their current knowledge, rather than multiple-choice questions. That’s partly because I don’t have time to craft good-quality MC items, partly because I want to make full use of the freedom I have to follow students’ noses about what questions and potential answers are worth investigating.
- Update (Feb 19): I almost forgot to mention, my classroom is not flipped. In other words, I don’t rely on before-class readings, screencasts, etc.
What About It is PI-Like?
- I start with a question for students to tackle individually. Instead of multiple-choice, it could be a circuit to analyze, or I might ask them to propose a possible cause for a phenomenon we’ve observed.
- I give a limited amount of time for this (maybe 2-3 minutes), and will cut it even shorter if 80% of students finish before the maximum time.
- I monitor the answers students come up with individually. Sometimes I ask for a vote using the flashcards. Other times I just circulate and look at their papers.
- I don’t discuss the answers at that point. I give them a consistent prompt: “In a moment, not right now but in a moment, you’re going to discuss in groups of 4. Come to agreement on whatever you can, and formulate questions about whatever you can’t agree on. You have X minutes. Go.”
- I circulate and listen to conversations, so I can prepare for the kinds of group discussion, direct instruction, or extension questions that might be helpful.
- When we’re 30 seconds from the end, or when the conversation starts to die down, I announce “30 more seconds to agree or come up with questions.”
- Then, I ask each group to report back. Usually I collect all the questions first, so that Group B doesn’t feel silenced if their question is answered by Group A’s consensus. Occasionally I ask for a flashcard vote at this point; more often, I collect answers from each group verbally. I write them on the board — roughly fulfilling the function of “showing the graph” of the clicker results.
- If the answers are consistent across the group and nothing needs to be clarified, I might move on to an extension question. If something does need clarification, I might do some direct instruction. Either way, I encourage students to engage with the whole group at this point.
Then we’re ready to move on — maybe with another round, maybe with an extension question (the cheat-sheet gives some good multi-purpose prompts, like “What question would make Alternate Answer correct?”). I’m also a fan of “why would a reasonable person give Alternate Answer?”
Why I Like It
It doesn’t require a ton of preparation. I usually plan the questions I’ll use, sometimes based on their pre-class reading (which, in my world, is actually in-class reading…). But any time during class that I feel like throwing a question out to the group, I can do it off the cuff if I need to.
During the group discussion phase (Step 4), questions and ideas start flowing and scientific discourse flourishes. Right in this moment, they’re dying to know what their neighbour got, and enjoy trying to convince each other. I don’t think I buy the idea that these techniques help because students learn better from each other — frankly, they’re at least as likely to pseudoteach each other as I am. I suspect that the benefit comes not so much from what they hear from others but from what they formulate for themselves. I wish students felt comfortable calling that stuff out in a whole group discussion (with 17 of us in the room, it can be done), but they don’t. So. I go with what works.
No one outside the small group has to know who asked which questions. The complete anonymity of clickers isn’t preserved, but that doesn’t seem to be a problem so far.
Notes For Improvement
There are some prompts on the cheat sheet that I could be using a lot more often — especially replacing “What questions do you have” or “What did you agree on” with “What did your group talk about?” or “If your group changed its mind, what did you discuss?”
There’s also a helpful “Things Not To Do (that seemed like a good idea at the time)” page that includes my favourite blooper — continuing to talk about the problem after I’ve posed the question.
If I were to add something to the “What Not To Do” list, it would be “Shifting/pacing while asking the question and immediately afterwards.” I really need to practice holding still while giving students a task, and then continuing to hold still until they start the task. My pacing distracts them and slows down how quickly they shift attention to their task; and if I start wandering the room immediately, it creates the impression that they don’t have to start working until I get near enough to see their paper.
Overheard while the students discussed the differences between the I-V characteristics of light bulbs and diodes.
Facilitating the process:
What else do we know?
Are we going to analyze predictions and measurements? Or just measurements?
So forward voltage is one category, reverse is another?
So, what have we concluded so far?
Do we have to write down our data?
I’m going to keep writing down the data.
So basically what you had was…
Were you maybe reading it like…
So what should we put here?
But it wouldn’t be through the LED. The voltmeter was shorting out the LED.
So they’re about the same, what’s the reason for that?
Holding our thinking to the model:
So this is actually supporting our idea…
One thing I noticed was that as voltage increased, current increased
I thought it always had all the voltage right there.
The current is supposed to go up, according to predictions.
Was VR1 always 0?
Do you have the same figures for positive and negative voltage? [Reply] Well, let’s compare.
I think there’s something wrong there.
So we can’t compare these to each other.
What I did was use Ohm’s Law, that you have to do that for each point individually.
I think the resistance will decrease because…
Diodes are crazy!
It probably works like a switch.
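The asymmetry the students kept circling (forward voltage as one category, reverse as another, and a “resistance” that isn’t constant) is exactly what a diode’s I-V curve encodes. As an illustration rather than anything we derived in class, the Shockley diode equation sketches it; the parameter values below are generic textbook assumptions, not measurements from this lab.

```python
import math

# Generic textbook values (assumed), not data from the students' circuits.
I_S = 1e-12   # saturation current in amps
V_T = 0.025   # thermal voltage in volts, roughly room temperature

def diode_current(v):
    """Ideal-diode (Shockley) equation: I = I_S * (exp(V/V_T) - 1)."""
    return I_S * (math.exp(v / V_T) - 1)

forward = diode_current(0.6)    # tens of milliamps: the diode conducts
reverse = diode_current(-0.6)   # essentially -I_S: it barely conducts at all
```

Plugging in the same voltage magnitude both ways shows the two “categories” the students invented: substantial current in one direction, essentially none in the other. Diodes are crazy, indeed.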
The past semester has been a tough slog with my first-year class. I’m slowly figuring out what resources and approaches were missing. Last year, I launched myself headfirst (and underprepared) into inquiry-based learning because most of the class members were overflowing with significant, relevant questions.
This year, the students are barely asking questions at all, and when they do, the questions are not very relevant — they don’t help us move forward toward predicting circuit behaviour, troubleshooting, or any of the other expressed goals we’ve discussed as a class. They’re mostly about electrical safety which, don’t get me wrong, is important, but talking about how people do and don’t get electrocuted has limited value in helping us understand amplifiers. I felt like I juiced those questions as much as I could, but it only led to more questions about house wiring and car chassis.
If I’m serious about inquiry-based learning, I have to develop a set of tools that allow me to adapt to the group. Right now I feel like my approach only works if the group is already fairly skilled at distinguishing between what we have evidence for and what we just feel like we’ve heard before, and at asking significant questions that move toward a specific goal. In other words, I wasn’t teaching them to reason scientifically; I was filtering out those who already knew from those who didn’t. Here are some of the things I need to be more prepared for.
I have never had so much trouble getting students to use their meters correctly. Here we are in second semester, and I still have students confidently using incorrect settings. I’d be happier if they were unsure, or had questions, but no, many are not noticing that they have problems with this. And I don’t mean being confused about whether you should measure 1.5V on the 20V or the 2000 mV setting… I mean measuring 0.1 Ohms on the 200 KOhm setting.
I switched this year to teaching them about current first, rather than resistance (like I did last year). I’m loath to reconsider because current is the only one that lends itself to causal thinking and sense-making early in the year (try explaining resistance to someone who doesn’t know what current is… and “electric potential,” to someone who doesn’t know anything formal about energy or force or fields, is just hell). Could this be part of why they’re struggling so much to use their meters correctly? Is there something about the “current first” approach that bogs them down with cognitive load at a stage when they just need some repetitive practice? I’m curious to check out the CASTLE curriculum, maybe over the summer, to try to figure some of this out.
I created a circuit-recording template last fall that I thought was such a great idea… it had a checklist at the top to help the students notice if they’d forgotten anything. Guess what? They started measuring without thinking about the meaning of the measurements — measuring as if it was just something to be checked off a list! No observations. No questions. No surprise at unusual or unintuitive numbers. Damn. The checklist is gone and never coming back — next year I’ll make sure we only measure things that the students have found a reason to measure.
Last term, I waited far too long to give the quiz on measurement technique. I knew they weren’t ready, and I kept thinking that if we spent more time practicing measuring (while exploring the questions we had painstakingly eked out), it would get better. Finally, we were so far behind that I gave the quiz anyway. The entire class failed it (not a catastrophe, given the reassessment policy), and the most common comment when we reviewed the quiz was “why didn’t you tell us this before??” Uh. Right. Quiz early, quiz often.
Guess what the teacher wants
The degree of “teacher-pleasing” being attempted is disheartening. Students are almost always uncomfortable making mistakes, using the word “maybe” in situations where it is genuinely the most accurate way to express the strength of our data, or re-evaluating what they think of as “facts.” But this group is unusual: there’s a high rate of students anxiously making up preposterous answers rather than saying “I don’t know.”
I tend toward a pretty aggressive questioning style — the kind of “what causes that, why does that happen” bluntness I would use with colleagues to bat ideas around. I’ve changed my verbal prompt to “what might cause that?” and “what could possibly be happening” in the hopes that it would help students discern whether they are certain or not, and also help them transition toward communicating the tentativeness of ideas for which we have little evidence. Obviously, I take care to draw out the reasoning and evidence in support of ideas, regardless of whether they’re canonical or not, and conversely make sure we discuss evidence against all of our ideas, including the “right” ones. I try to honour students’ questions by tracking them and letting them choose from among the class’s questions when deciding what to investigate next. But valuing their questions and thinking is clearly not enough.
I gave a test question last semester that asked students to evaluate some “student” reasoning. It used the word “maybe” in a completely appropriate way, and that’s what I heard outraged responses about from half the class. They thought the reasoning was poor (and also reported that it was badly written!) because of it. Again, we practiced explicitly, but sometimes I feel like I’m undermining their faith in “right answer” reasoning without helping them replace it with something better…
On the odd occasion when I ask someone a question and they say “I don’t know,” I make a point of not putting them on the spot, but of gathering info/evidence/ideas from other students for the first student to choose from, or breaking the class into small groups and asking them to discuss. I try to make sure that the person who said “I don’t know” has as few negative consequences as possible. Yet the person who says it inevitably looks crestfallen.
Talking in class
The frequency of students speaking up in class is at an all-time low. I wonder if this has been influenced by my random cold-calling — they figure I’ll call on them eventually so there’s no sense putting their hand up to make a comment or ask a question? The thing is, they don’t ask those questions when I call on them — just answer the question I ask.
At the same time, the frequency of whispered side conversations is at an all-time high, whether the speaker with the floor is me or another student. I think I’m unusually sensitive to this — I find it completely distracting, and can barely maintain my train of thought if students are whispering to each other. Maybe that’s partly my hearing, which is fairly acute — I can actually hear their whole conversation, even if they’re whispering at the back of the room (keep in mind that there are only 17 people and the room is pretty small). So my standard response to this is one warning during class (followed by a quiet, private conversation after class) — if it happens again, they’re leaving the room. Is this part of why they’re afraid to talk out loud — because I crack down on the talking under their breath? I’m open to other ways of responding but out of ideas at the moment.
Even the strongest students are still having trouble explaining causes of physical effects. They know I won’t accept a formula as a cause, but they can’t explain why, and when I ask someone to explain a cause, they will consistently give a formula anyway (figuring that an answer is always better than no answer, I guess). Next approaches: asking them to write down the cause, then discussing in groups.
As Jason articulates clearly, I think that my students need more help motivating and strengthening their scientific discourse. He summarizes a promising-sounding approach called Guided Reciprocal Questioning as follows:
- Learn about something.
- Provide generic question frames.
- Students generate questions individually.
- Students discuss the questions in their groups.
- Share out.
I do something similar to #1-3, but I’m ready to try #4-5, with appropriate “discussion frames”, to see if I can help the students hold each other accountable to their knowledge. Right now, they barely propose questions or answers, but when they do, the class seems to accept them, even if they contradict something else we just talked about.
Also, Janet Abercrombie wrote recently in the comments about a Question Formulation Technique that I’d like to look into some more.
Conclusion: It works anyway
The whole experience was kind of heart-breaking. But the conversations with students kept convincing me that I had to do it anyway. I don’t know how many students took the time to say to me, “whoa, it seems like you actually want us to understand this stuff.” The look of astonishment really said it all. The bottom line is, this group is a much better test of the robustness of my methods than last year’s group could be.
Last year, I accidentally fell into an inquiry-driven style of teaching. This year, I set out to do it on purpose. Like Brian Frank’s example of students who do worse on motion problems after learning kinematics equations, my performance went down. Unlike in that example, though, inquiry is a sense-making tool for the teacher, not just the students, so I’m doing more sense-making, not less. The upshot: my awareness has increased while my performance has decreased. (The proper spelling is A-N-X-I-E-T-Y).
Things that improved
I added a unit called “Thinking Like a Technician,” where students practice summarizing, clarifying, and identifying physical causes, using everyday examples. When we got to atomic theory, they were less freaked out by the kind of sense-making I was asking them to do.
I started using a whiteboard template, based on Jason’s writing about Claim-Evidence Reasoning. Like Jason, I introduced it to students as “Evidence-Claim-Reasoning.” The increased organization of whiteboards makes things flow more smoothly for whiteboard authors when the discussion happens a few days after the analysis. The standard layout lowers the cognitive load for students in the audience, since they know what to expect and look for.
The major tactical error I made
Last year I started with magnets and right away focused on students’ ideas about how atoms cause magnetic phenomena. That means that our first area of inquiry was atoms. This year, I thought I was being smart and started by digging into what students wondered about electricity. BIG MISTAKE. Students wonder a lot about electricity — mostly about how you can get electrocuted, or how to give someone else a shock. It was fascinating reading for me, but they have absolutely no tools for making sense of the answers to their questions. The conventional wisdom about “electrons always seek ground” and “electricity always takes the path of least resistance” doesn’t help. Since they start with neither foundational knowledge about electrons nor measurement technique with a multimeter, their attempts to either research or measure their way towards coherent ideas were random and pretty fruitless. As usual, Brian sums it up — I had backed us into a corner where “this makes no sense and right now we have no tools for making sense of it.”
We are finally recovering (about 6 weeks later… *sigh*). Some useful things got accomplished in the meantime — noticing and measuring the discrepancies between meters, and figuring out some things about batteries along the way (which will help in the next unit). Note for next time: start with atoms. Atoms are in concrete things like chairs and sweaters — starting there avoids the need to begin with the jumble of ideas called “electricity” (power/charge/energy/voltage/current/potential/etc.). Also, give the quiz about meter technique earlier; it helped students strengthen understandings that would have been helpful a month ago.
When engaging in a new strategy (whether for students or me), make sure it has some form of sense-making built-in.
Also, make sure the rest of life is not chaotic and stressful while doing these experiments. The existential angst can be a bit much.
I’ve been frustrated lately by my lack of focus and difficulty getting things done. After accidentally venting on my public blog (rather than the private one I intended to use … *sigh*) I realized there were a few factors at play that could shed some light on my students’ experiences.
1 — High stakes at the beginning of the year

The beginning of the year feels high-stakes to me because it’s the time when students are forming their first impressions, the time when expectations get set and rapport gets built. I’m not saying that those things can’t change over the course of the year. But I think it’s a lot easier to set an initial expectation than to correct it later, especially about my wacky grading system, my insistence that students “not believe their teachers,” and so on.
There are a bunch of fixes for this. One is to trust that my intro and orientation activities (videos, Marshmallow Challenge, name game, Teacher Skill Sheet, etc.) set good groundwork for productive classroom culture. These activities are well-defined — I can print out last year’s agenda and have a decent first week, which should lower the stakes on my successive lesson plans. Another is to document more carefully what I’ve done, so that next year, when I’m going batty with all the beginning of the year logistics, I don’t add lesson planning to the cognitive load.
How this applies to my students: There are lots of situations that they see as high-stakes and in which they underperform (or just procrastinate their way out of). Tests, scholarship applications, job applications. Tests are now pretty low-stakes, but it would be great to do the same for job applications, interviews, etc. — maybe by staging a series of “early drafts.”
2 — Success can cause fear of failure
I’m really proud of what my inquiry class accomplished last year. The same ideas about evaluating claims and making well-supported inferences run through not just the content but the process. The classroom culture was better than I could have expected. I want to do the same thing this year. The only problem is that it caught me so off guard last year that very little is documented (certainly no daily lesson plans or learning activities for the first couple of months — just jot notes of my impressions or student comments). It’s immobilizing to imagine doing it again without instructions — what if they fail to buy in to the entire inquiry approach?
It feels like there’s a narrow range of introductions that make everything work out, and if I miss it, I’ll have to go back to lecturing. Hey, stop that laughing! I know, I rail against my students’ unwillingness to do things without instructions. In my defense, there is a small difference: they can reassess their lab skill over and over within a few days. Whatever I do with my class, it affects their trust in me in ways that cannot be fully undone, and I don’t get to reassess that particular moment until next year.
Fix: document my learning activities thoroughly this year. Next year I might modify them or toss them out, but at least they’ll be there for those days when I just need to repeat something.
How this relates to my students: I’m not sure what to do here besides what I’m already doing: each assessment attempt is low-stakes, and there’s a wide range of possible good answers for almost everything. The feeling of having fluked into something can really mess with your head (even if, in my case, I think luck was a small element, dwarfed by hard work and obsessive preparation).
3 — No net.
It feels like there’s no net because the “peer-reviewed research community” setup I’m using depends heavily on the good will of the students. If any significant chunk decided to zone out, the system would not work. If there isn’t a critical mass of students writing up papers and giving feedback, then there simply is no course. If I had a group where absolutely no one was willing to make a good-faith effort, then I suppose I could lecture and assign problem sets (yes, I kept them from my first year). The reality is that that’s unlikely to happen. My students tend to be highly motivated, with a wide age range (the oldest easily double the age of the youngest). They appreciate being trusted to think.
Fix: no fix needed. Especially in a group of 17 (as I have this year).
I wonder what kinds of things in my students’ lives feel like there is no net?
In the same vein as the last post, here’s a breakdown of how we used published sources to build our model of how electricity works.
- I record questions that come up during class. I track them on a mind-map.
- I pull out the list of questions and find the ones that are not measurable using our lab equipment, and relate to the unit we’re working on.
- I post the list at the front of the room and let students write their names next to something that interests them. If I’m feeling stressed out about making sure they’re ready for their impending next courses/entry into the work world, I restrict the pool of questions to the ones I think are most significant. If I’m not feeling stressed out, or the pool of questions aligns closely with our course outcomes, I let them pick whatever they want.
- The students prepare a first draft of a report answering the question. They use a standard template (embedded below). They must use at least two sources, and at least one source must be a professional-quality reference book or textbook.
- I collect the reports, write feedback about their clarity, consistency and causality, then hand back my comments so they can prepare a second draft.
- Students turn in a second draft. If they have blatantly not addressed my concerns, back it goes for another draft. They learn quickly not to do this. I make a packet containing all the second drafts and photocopy the whole thing for each student. (I am so ready for 1:1 computers, it’s not funny.)
- I hand out the packets and the Rubric for Assessing Reasoning that we’ve been using/developing. During that class, each student must write feedback to every other student. (Note to self — this worked with 12 students. Will it work with 18?)
- I collect the feedback. I assess it for clarity, consistency, and usefulness — does it give specific information about what the reviewee is doing well and should improve? If the feedback meets my criteria, I update my gradebook — giving well-reasoned feedback is one of the skills on the skill sheet.
- If the feedback needs work, it goes back to the reviewer, who must write a second draft. If the feedback meets the criteria (which it mostly did), then the original goes back to the reviewer, and a photocopy goes forward to the reviewee. (Did I mention I’m ready for 1:1 computers?)
- Everyone now works on a new draft of their presentation, taking into account the feedback they got from their classmates.
- I collect the new drafts. If I’m not confident that the class will be able to have a decent conversation about them, I might write feedback and ask for another draft. (Honest, this does not go on forever. The maximum was 4, and that only happened once.) I make yet another packet of photocopies.
- Next class, we will push the desks into a “boardroom” shape, and some brave soul will volunteer to go first. Everyone takes out two documents: the speaker’s latest draft, and the feedback they wrote to that speaker.
The speaker summarizes how they responded to people’s feedback, and tells us what they believe we can add to the model. We evaluate each claim for clarity, consistency, causality. We check the feedback we wrote to make sure the new draft addressed our questions. We try to make it more precise by asking “where,” “when,” “how much,” etc. We try to pull out as many connections to the model as we can. The better we do this, the more ammo the class will have for answering questions on the next quiz.
Lots of questions come up that we can’t answer based on the model and the presenter’s sources. Sometimes another student will pipe up with “I think I can answer that one with my presentation.” Other times the question remains unanswered, waiting for the next round (or becoming a level-5 question). As long as something gets added to the model, the presenter is marked complete for the skill called “Contribute an idea about [unit] to the model.”
We do this 4-5 times during the semester (once for each unit).
Example of a student’s first draft
I was pretty haphazard in keeping electronic records last semester. I’ve got examples of each stage of the game, but they’re from different units — sorry for the lack of narrative flow.
This is not the strongest first draft I’ve seen; it illustrates a lot of common difficulties (on which, more below). I do want to point out that I’m not concerned with the spelling. I’ve talked with the technical writing instructor about possible collaborations; in the future, students might do something like submit their paper to both instructors, for different kinds of feedback. I’m also not concerned with the informal tone. In fact, I encourage it. Getting the students to the point where they believe that “someone like them” can contribute to a scientific conversation, must contribute to that conversation, or indeed that science is a conversation, is a lot of ground to cover. There is a place for formal lab reports and the conventions of intellectual discourse, but at this point in the game we hadn’t developed a need for them.
Feedback I would write to this student
Source #1: Thanks for including the description of what the letters mean. It improves the clarity of the formula.
Source #2: It looks like you’ve used the same source both times. Make sure to include a second source — see me if you could use some help finding a good one.
Clarity: In source #1, the author mentions “lowercase italic letters v and i…” but I don’t see any lower case v in the formula. Also, source #1 refers to I_f, but I don’t see that in the formula either. Can you clarify?
Cause: Please find at least one statement of cause and effect that you can make about this formula. It can be something the source said or something you inferred using the model. What is causing the effect that the formula describes?
Questions that need to be answered: That’s an interesting question. Are you referring to the primary and secondary side of a transformer? If so, does the source give you any information about this? If you can’t find it, bring the source with you and let’s meet to discuss.
Common trouble spots
It was typical for students to have trouble writing causal statements. I'm looking for any cause-and-effect pair that connects to the topic at hand. I think the breadth of the question is what makes it hard for students to answer. They don't necessarily have to tell me "what causes the voltage of a DC inductor to be described by this formula" (which would be way out of our league). I'd be happy with "the inductor's voltage is caused by the current changing suddenly when the circuit is turned on," or something to that effect. I'm not sure what to do about this, except to demonstrate that kind of thinking explicitly, and continue giving feedback.
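That kind of causal statement maps onto the standard inductor relation v = L·di/dt (an assumption on my part: the student's sources quoted only the letter names, not the equation itself). A minimal numerical sketch of "sudden current change causes voltage," with invented component values:

```python
# Causal sketch: a sudden change in current "causes" a voltage across an
# inductor, per v = L * di/dt.
# ASSUMPTION: the formula under discussion is the standard inductor law;
# the component value below is invented for illustration.

L_HENRIES = 0.01  # a hypothetical 10 mH inductor

def inductor_voltage(di, dt, L=L_HENRIES):
    """Approximate v = L * di/dt for a current change di over a time dt."""
    return L * di / dt

# Circuit switched on: current ramps from 0 A to 1 A in 1 ms -> big voltage.
v_fast = inductor_voltage(di=1.0, dt=1e-3)   # roughly 10 V
# The same change spread over a full second barely produces any voltage.
v_slow = inductor_voltage(di=1.0, dt=1.0)    # roughly 0.01 V
```

The point of the sketch is the comparison, not the numbers: the same current change produces a big voltage or a tiny one depending on how suddenly it happens.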
It was also common for students to have trouble connecting ideas to the model. If the question was about something new, they would often say “nothing in the model yet about inductors…” when they could have included any number of connections to ideas about voltage, current, resistance, atoms, etc. I go back and forth about this.
In the example above, I could write feedback telling the student I found 5 connections to the model in my first three minutes of looking, and I expect them to find at least that many. I could explicitly ask them to find something in the model that seemed to contradict the new idea (I actually had a separate section for contradictions in my first draft of the template). That helped, but students too often wrote "no contradictions" without really looking. Sometimes I just wait for the class discussion, and ask the class to come up with more connections, or ask specific questions about how this connects to X or Y. This usually works well, because that's the point at which they're highly motivated to prevent poorly reasoned ideas from getting into the model. Still thinking about this.
Example Student Feedback
(click through to see full size)
I don’t have a copy of the original paper on “Does the thickness of wire affect resistance,” but here is some feedback a classmate wrote back.
Again, you can see that this student answered “What is the chain of cause and effect” with “No.” Part of the problem is that this early draft of the feedback rubric asks, in the same box, if there are gaps in the chain. In the latest draft, I have combined some of the boxes and simplified the questions.
What’s strong about this feedback: this student is noticing the relationship between cross-sectional area of a wire (gauge), and cross-sectional area of a resistor. I think this is a strong inference, well-supported by the model. The student has also taken care to note their own experience with different “sizes” of resistor (in other words, resistors of the same value that are cross-sectionally larger/smaller). Finally, they propose to test that inference. The proposed test will contradict the inference, which will lead to some great questions about power dissipation. Here the model is working well: supporting our thinking about connections, and leading us to fruitful tests and questions.
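The contradiction that proposed test should surface can be sketched in a few lines: for a wire, a bigger cross-section means less resistance, but for packaged resistors, a bigger body usually signals a higher power rating at the same resistance. The package sizes and ratings below are common commercial values, listed only for illustration:

```python
# Sketch of the distinction the proposed test should surface: for a wire,
# bigger cross-section means less resistance (R = rho * L / A); for a
# packaged resistor, a bigger body usually means a higher power rating
# at the SAME resistance. Package sizes/ratings are common commercial
# values, included only for illustration.

same_value_resistors = [
    # (package, resistance_ohms, power_rating_watts)
    ("1/8 W", 1000, 0.125),
    ("1/4 W", 1000, 0.25),
    ("1 W",   1000, 1.0),
]

distinct_resistances = {r for (_, r, _) in same_value_resistors}
distinct_ratings = {p for (_, _, p) in same_value_resistors}

assert len(distinct_resistances) == 1  # same value in every size...
assert len(distinct_ratings) == 3      # ...but a different power rating each
```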
Example of my first draft
Sometimes I wrote papers myself. This happened if we needed 12 questions answered on a topic, but there were only 11 students. It also happened when we did a round of class discussions only to realize that everyone’s paper depended on some foundational question being answered, but no one had chosen that question. Finally, I sometimes wrote one if I needed the students to learn a particular thing at a particular time (usually because they needed the info to make sense of a measurement technique or new equipment). This gave me a chance to model strong writing, and how to draw conclusions based on the accepted model. It was good practice for me to draw only the conclusions that could be supported by my sources — not the conclusions that I “knew” to be true.
I tried to keep the tone conversational — similar to how I would talk if I was lecturing — and to expose my sense-making strategies, including the thoughts and questions I had as I read.
In class, I would distribute my paper and the rubrics. Students would spend the class reading and writing me some feedback. I would circulate, answering questions or helping with reading comprehension. I would collect the feedback and use it to prepare a second draft, exactly as they did. If nothing else, it really sold the value of good technical writing. The students often commented on writing techniques I had used, such as cutting out sections of a quote with ellipses or using square brackets to clarify a quote.
Reading student feedback on my presentations was really interesting. I would collect their rubrics and use them to prepare a second draft. The next day, I would discuss my answers and clarifications with them, and they would vote on whether to accept my ideas to the model. At the beginning of the year they accepted my ideas pretty uncritically, but by the end of the year I was getting really useful feedback and suggestions about how to make my model additions clearer or more precise.
I wish I had some student feedback to show you, but unfortunately I didn’t keep copies for myself. Definitely something I will do this year.
How It’s Going
I’m pretty satisfied with this. It might seem like writing all that feedback would be impossible, but it actually goes pretty quickly.
Plan for improvement: Insist on electronic copies. Last year I gave the students the choice of emailing their file to me or making hard copies for everyone and bringing them to class. Because bringing hard copies bought them an extra 12 hours to work on it, many did that. But being able to copy and paste my comments would help. Just being able to type my comments is a huge time-saver (especially considering the state of my handwriting).
The students benefit tremendously from the writing practice, the thinking practice and, nothing to sneeze at, the “using a word-processor correctly” practice. They also benefit from the practice at “giving critical feedback in a respectful way,” including to the teacher (!), and “telling someone what is strong about their work, not just what is weak.” If their writing is pretentious, precious, or unnecessarily long, their classmates will have their heads. And, reading other students’ writing makes them much more aware of their own writing habits and choices.
I’m not grading the presentation, so I don’t have to waste time deliberating about the grade, or whether it’s “good enough.” I just read it and respond, in a fairly conversational way. It’s a window into my students’ thinking that puts zero pressure on me, and very little pressure on the students — it’s intellectually stimulating, I don’t have to get to every single student between 9:25 and 10:20, and I can do it over an iced coffee on a patio somewhere. I won’t lie — it’s a lot of work. But not as much work as grading long problem sets (like I did in my first year), way more interesting, and with much higher dividends.
MS Word template students used for their papers
Rubric students used for writing feedback. Practically identical, but formatted for handwritten comments
I promised, months ago, to write about
an example of a measurement cycle, including how I chose the questions, why they arose in the first place, and how students investigated them
I’ve tried all summer to write this blog post and failed, mostly because I’m discovering the weaknesses in my record-keeping. I’m going to answer as much of the question as I can, then make a few resolutions for improving my documentation.
Last year in DC Circuits (then in AC Circuits, and in Semiconductors 1 and 2), our days revolved around building and refining a shared model of how electricity works. There were two main ways we built the model:
- measuring things, then evaluating our measurements (aka “The Measurement Cycle”)
- researching things, then evaluating the research (aka “The Research Cycle”)
The measurement cycle
- In the process of evaluating some research or measurements, new questions come up. I track them on a mind-map (shown above).
- When I’m prepping our shop day, I pull out the list of questions and find the ones that are measurable using our lab equipment.
- I choose 4-6 questions. I’m ideally looking for questions that have obvious connections to what we’ve done before, that generate new questions, and that are significant in the electronics-technician world (mostly trouble-shooting and electrical sense-making).
- Things I think about: What are some typical ways of testing these questions? Do the students know how to use the equipment they will need? Is it important to have a single experimental design, or can I let each lab group design their own? Is there a lab in the lab-book with a good test circuit? Is there a skill on the skill sheet that will get completed in the course of this measurement? The answers to these questions will become my lesson plan.
- At the beginning of the shop period, I post the questions I expect them to answer and skills I expect them to demonstrate. We have a brief discussion about experimental design. Sometimes I propose a design, then take suggestions from the class about how to clarify it or improve it. Sometimes I ask the lab groups to tell me how they plan to test the question. Sometimes, I just ask for a “thumbs up/down/sideways” on their confidence that they can come up with a design and, if they’re confident, I turn them loose.
- If they will need a new tool to test the questions, we develop and document a “Hazard ID and Best Practice” for that tool. (More on this soon…)
- The students collect data — one data point for each question. When they finish (and/or, if they have questions), they put their names up on the white board.
- When a group finishes, they have to walk me through their data. I check their lab record against our “best practices for shop notebooks” (an evolving collection of standards generated by the class), and point out where they need to clarify/make changes. If their measurement process has demonstrated a skill that’s on the skill sheet, I sign it off. Then I take pictures of their lab notes, and they are done for the day. I run the pics through a document scanning app and generate a single PDF.
- On our next class day, everyone gets a copy of the PDF. I break them into 4-6 groups, one for every question they tested. No lab partners together in a group. Each group analyzes everyone’s data for a single question, makes a claim, and presents it to the class. The class helps the presenters reconcile any contradictions, then they vote on whether to accept the idea to the model. This process generates lots of new questions, some of which can’t be answered. They go on the list for next week.
- Repeat for 15 weeks.
Students were evaluating their measurements to figure out “What happens to resistance when you hook multiple wires together?” Here’s the whiteboard they presented to the class. Lots of good stuff going on here: they’re taking note of the effect meter settings have on measurement, noticing that wires have resistance (even though they’re called “conductors,” not “resistors”), and they’re able to realize that the meter measures the resistance of the leads, as well as what’s connected to the leads. In case you can’t read their claim, it says “Longer or more leads we connect and measure the resistance, more resistance we get.”
Questions students were curious about
Here’s where this inquiry-style stuff pays me dividends: I’m anticipating the path of future questions, and I’m thinking maybe it will be “what happens when you hook things up in parallel or in other arrangements.” I am so wrong. The next question is, “is it exactly proportional?” Whoa. I love that they’re attending to the fact that things aren’t always proportional.
The next question surprises me even more. It’s “If this works for test leads, does it work for light bulbs/hookup wires/resistors too?“
I was kind of stunned by that. At this point, the model includes the idea that resistance varies with length, cross-sectional area, and material. This should lead us to expect different amounts of resistance from different materials, but not entirely different patterns of variance. Especially between test leads and hookup wires!
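The model’s idea here matches the standard relation R = ρL/A: changing the material changes the resistivity ρ (the amount of resistance), but not the pattern, since every uniform conductor is proportional to length in the same way. A quick sketch, using textbook room-temperature resistivities and an invented cross-section:

```python
# The model's claim as code: R = rho * L / A. Changing the material changes
# rho (the amount of resistance), but not the pattern: doubling the length
# doubles the resistance for every material.
# Resistivities are textbook room-temperature values in ohm-metres;
# the cross-sectional area is invented for illustration.

RESISTIVITY = {
    "copper": 1.68e-8,
    "nichrome": 1.10e-6,
}

def resistance(material, length_m, area_m2):
    """R = rho * L / A for a uniform conductor."""
    return RESISTIVITY[material] * length_m / area_m2

AREA = 3.3e-6  # m^2, roughly the cross-section of a 12 AWG conductor

for material in RESISTIVITY:
    r1 = resistance(material, 1.0, AREA)
    r2 = resistance(material, 2.0, AREA)
    # Same pattern for both materials: exactly proportional to length.
    assert r2 / r1 == 2.0
```

Nichrome comes out dozens of times more resistive than copper, but it doubles with length just the same, which is exactly the symmetry the students wanted to verify for themselves.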
On one hand, I’m afraid this means they think that light bulbs and hookup wire somehow obey fundamentally different physical laws than test leads. Their willingness to imagine the universe as disconnected and patternless offends against my sense of symmetry, I guess. I get over myself and realize that it’s awesome that they want their own sense of symmetry to be based on their own observations. So, I add it to the list of questions for the following week. I mentally curse the lab books of the world, which would have hustled the students past this moment without giving them a chance to notice their own uncertainty, which would then end up buried in their heads, a loose brick in the foundation of their knowledge, practically impossible to excavate.
How they investigated
The following week, we investigate “Is the change in resistance exactly proportional” and “do other materials do the same thing.” In our beginning-of-class strategy session, I tease out what exactly they want to know. Are they asking if the change in resistance is exactly proportional to … length? number of leads? What? They want to know about length, so that’s settled. There are lots of other questions on the docket that day, including
- Is there resistance in a terminal block?
- Can electrons get left behind in something and stay there? [I think this is a much more interesting way to ask it than the textbook-standard "Is current the same everywhere in a series circuit"]
- If electrons can get stuck, would it be a noticeable amount? Is that why a light dims when you turn it off? Are they getting lost or are they slowed down?
- Can more electrons pass through a terminal block than a wire?
- If you connect light bulbs end-to-end, we expect the total resistance to go up, but what will happen to the current? Is it the same as with one bulb? Will there be less electrons in bulb 3 than in bulb 1? Will the bulbs be dimmer as you go along?
They’re confident using the tools and materials, so I let them design their experiments however they want.
Some very cool experiments resulted. To check if total resistance in series was additive, one group used light bulbs, one used resistors, and (my favourite) one group removed two entire spools of hookup wire from the storage cabinet, measured the resistance of each spool separately, then connected them in series (as shown).
This generated some odd data and some experimental design difficulties: there was no easy way to figure out the length of the spool. They could still tell that their data was odd, though, because the spools appeared visually to be about the same length, so whatever that length was, they should have roughly twice as much of it (short pause to appreciate that the students, not the teacher or textbook, made this judgement call about abstraction). Or, at least, the resistance should be more than that of one spool.
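The students’ expectation amounts to a one-line model, resistances in series add, which can be written as a sanity check (the resistance value is invented, since the real spool measurements aren’t shown here):

```python
# The students' expectation, as a one-line model: resistances in series add.
# So two roughly equal spools connected end-to-end should measure about
# twice one spool alone -- and at the very least, MORE than one spool.
# The resistance value below is invented for illustration.

def series_resistance(*resistances):
    """Ideal series model: total resistance is the sum of the parts."""
    return sum(resistances)

r_spool = 4.7  # ohms, a hypothetical single-spool measurement

expected = series_resistance(r_spool, r_spool)
assert expected == 2 * r_spool  # exactly double in the ideal model
assert expected > r_spool       # the minimum sanity check the group applied
```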
And that’s not what happened. If you look closely at the diagram, each spool appears to have 3 ends… Note that the sentence at the bottom shows that they distrust their meter. However, they did not fudge the data, despite not believing it was right. I believe that this is my reward for not grading lab books. Wait — not grading lab books is my reward for not grading lab books!
In the following class, this experiment generated no additions to the model but a mother lode of perplexity. It also resulted in demands for a standardized way of recording diagrams [Oh OK, since you insist...], and questions about what happens when you hook up components “side-by-side” instead of “one after the other.” And we’re off to the races again.
Speaking of standards for record-keeping…
It was really difficult to find the info for this blog post, because my record-keeping system last year was not designed to answer the question “how did questions arise.” It was intended to answer the question, “Oh God, what the heck am I doing?”
Some changes that will help:
- Using Jason Buell’s framework to keep whiteboards organized in Claim/Evidence/Reasoning style
- In the PDF record of students’ measurements, including a shot of the front-of-class whiteboard where I recorded the agenda
- Giving meaningful names to those PDF files. “20110928 lab data” is not cutting it.
- During class discussion, recording new questions next to the idea we were discussing when the question came up. Similarly, on the mind-map, attaching new questions to the ideas/discussions that generated them.
- Keeping electronic records of the analysis whiteboards (step 9 above), not just the raw data. Maybe distribute these to students as well, to have a record for their notes when we inevitably have to revisit old ideas and re-evaluate them in light of new evidence.
Here’s a snapshot of our model as it existed around March — I have removed all student names for privacy, but I would normally track who proposed or asked what, and keep notes about the context in which questions arose.
We gathered evidence from our own measurements and from published sources. The students proposed most of the additions to the model, but occasionally I proposed one. I’ll write in detail about each method next — this post covers some basic ideas common to all approaches. Additions to our shared mental model of electricity tended to be short, simple conceptual statements, like “An electron cannot gain or lose charge” or “Between the nucleus and the electrons, there is no ‘stuff.’”
Here are the ground rules I settled on (though introduced gradually, and not phrased like this to the students). I was flying by the seat of my pants here, so feel free to weigh in.
- Every student must contribute ideas to the model.
- For an idea to get added to the model, it must get consensus from the class.
- Deciding to accept, reject, or change a proposal must be justified according to its clarity, precision, causality, and coherence (not “whether you like the idea/presenter”).
- Each student is responsible for maintaining their own record of the model.
- Students may bring their copy of the model to any quiz.
- Quizzes will assess whether students can support their answers using the model.
1. What if the class rejects someone’s proposal?
Adding an idea to the model is a skill on the skill sheet. As with any other skill, students get lots of feedback, then they improve. They have until the end of the semester. I don’t allow them to move to the “class discussion” stage until I’m satisfied that they have a well-reasoned attempt that addresses previous feedback with a solid chance of getting something accepted. Unlike other skills, though, they’re getting individual feedback sheets from each of their classmates, not just me. Most people wrote two drafts. No one wrote more than three. More on this soon.
2. Consensus from the class — are you kidding?
I had a small class this year (12 students at the high-water mark) and I’m not sure how it will work with 20-24 next year — I’m thinking hard about how to scale this. But yes, it really worked. We used red-green-yellow flashcards to vote quickly. I did not vote. My role was to
- Teach them how to have a consensus-focussed conversation
- Point out supporting or conflicting ideas in the model, if the students didn’t — in other words, to “keep the debate alive [if] I’m not convinced they have compelling arguments on either side”
- Do some group-dynamics facilitation, such as recording new questions that come up (for later exploration) as a result of the discussion, making sure everyone gets a chance to speak and is not interrupted, and sometimes doing some graphical recording on the whiteboard if the presenting students are too busy answering questions to do their own drawing.
- Determine when the group is ready to move to a vote or needs to table the conversation
That made me a cross between the secretary and the speaker of the house. Besides being productive, it was fun. A note about group dynamics facilitation: I’m often frustrated by the edu-fad of re-labelling teachers “facilitators.” I’ve done a lot of group dynamics facilitation for community groups. It is a different role than teaching, and it’s disingenuous to pretend that students need only facilitation, not teaching. However, in this situation, facilitation was called for. The group had a goal (to accept or reject an idea for the model) and they had a process (evaluate clarity, precision, etc. etc.); my job was to attend to the process so they could attend to the goal.
3. What about people blocking consensus out of personal dislike, and other sabotage maneuvers?
There is a significant motivation for students to take part in good faith. First off, no one wants these conversations to go on forever: they’re mentally challenging and no one has infinite stamina. Second, if a well-supported, generative idea is left out of the model, no one will be able to use it on the next quiz. Third, if the class accepts a poorly supported idea, it will cause a huge headache down the road when ideas start to contradict; we’ll have to backtrack, search out the flaw in our reasoning, and then uproot all the later ideas that the class accepted based on the poorly reasoned idea. They were darned careful what they allowed into their model.
Other than that, I used my power as the facilitator. Conflict happens in any group, and the usual techniques apply. It was almost always possible to resolve the conflict fairly quickly: modifying the wording, adding a caveat. We’d vote again, the idea would get accepted, and we’d move on. Sometimes a student felt strongly enough about their objection to propose an experiment that we should try; unless the group could come up with a reason why we shouldn’t do that experiment, we’d just put the idea on hold, and I’d add that experiment to the list for that week.
Once, a student voted against a proposal but was unable to explain what they needed made clearer, more precise, more causally linked, etc. It just “didn’t sound right.” None of my usual techniques worked to draw out of that student what they needed to be confident that the proposal was well-reasoned, or just to feel heard. So I reminded them that the model is always modifiable, that we can remove ideas as well as add them, and that we have committed to base our decisions on the best-quality judgement we can make, not “truth” or “feeling.” I told them that I would consider the idea accepted for the purposes of using it on quizzes, but record the disagreement, and that if/when new information came up, we would revisit it.
An important point about facilitation: these conversations were sometimes fun and lighthearted, but sometimes gruelling for the students, especially as we moved into more abstract and unintuitive material. The most important mistake I made was letting the conversation drag. To remedy this, I used what I consider fairly aggressive facilitation — quickly redirecting rambling speakers, proposing time limits, summarizing and restating sooner than I otherwise might, etc. If the conversation was so unclear that students weren’t able to even give feedback about how to improve, I had to diagnose that as soon as possible. I would say something like “It looks like we’re not ready to move forward here. Joe, let’s meet after class and see if we can figure out how to strengthen your proposal.”
4. Students maintain their own record of the model
- Students had a reason to take decent notes, at least about certain key ideas.
- The list got really, really long, and included a lot of topics, many of them linked to each other. It was a powerful visual indicator of just how huge an endeavour this really is.
- Most ideas fit clearly under more than one category. It was up to the students to choose. It was a good reminder that the new ideas aren’t divorced from the old ideas, that all the chapters in the textbook really do connect.
5. Every test is “open notes”?
In a way, yes. I felt a bit weird about this — there are some things they really need memorized. But it worked out really well. I allow students to bring their copy of the model to any quiz (no other notes). It must contain only the ideas voted on and accepted by the class — no worked problems, etc. I circulate during the quiz and randomly choose some paper copies to take a close look at. It was a non-issue.
About halfway through the semester, students gradually stopped bringing it. We used that thing so much, for so many purposes, that they mostly didn’t need it. Besides, like any quiz, if you have to look everything up, you won’t have time to finish. Then you’ll have to apply for reassessment, which means doing some practice problems, which means building speed and automaticity, which means needing the notes less.
6. “You’re going to grade us based on what we say?!”
I modified many quiz questions so that they said things like
Explain, using a chain of cause and effect, one possible reason for the difference in energy levels on either side of S1. It doesn’t have to be “right” – but it must be backed up by the model and not have any gaps.
This worked extremely well. It helped students enter the tentative space between “I get it” and “I don’t get it,” saying things like “Based on our model, it is possible that…”. It gave me the opportunity to show a variety of well-reasoned answers (I sometimes used excerpts from student answers to open conversation in a later class). It helped me banish my arch nemesis: the dreaded “But You Said.” (Because I didn’t say. They did.)
It’s been 4 whole months since I wrote about my classroom, and I had fallen far behind long before that. The short story is this: in September, I fell sideways into inquiry-based teaching. Since then,
- I learned a bunch of tough lessons (ex: my students couldn’t tell a cause from a definition)
- I got a lot more honest with myself about how well my teaching actually works (ex: I do more pseudoteaching than I thought I did)
- I fostered a classroom culture that was way more honest than in the past (by attributing authorship, letting student questions direct our activities, sharing results of regular class feedback, direct-teaching them how to respectfully disagree with the teacher, etc. The increased honesty is where the hard lessons came from)
- I learned that teaching 5 preps in five months, using an educational approach that I hadn’t anticipated, makes me so sleep-deprived that I am incapable of synthesizing my thoughts into readable blog posts
- I changed my mind about a bunch of things (ex: I used to think that any student who attends class, works hard, and uses the resources available to them will complete the program. Hold the tomatoes.)
- I noticed a bunch of things that I hadn’t realized I didn’t know (ex: I’m not sure exactly what I want my students to capture in their class notes; there are some shop activities where I’m not completely sure what question I intend for them to answer).
It was uncomfortable and sometimes I couldn’t tell if I was “doing it right.” In other words, I practiced what I preached. I spent a lot of sleepless Sunday nights, worrying that I wasn’t good enough to pull this off and that I’d mess up my students’ minds, or at least their careers. (I eventually figured out how to judge that my skills, though imperfect, are up to the task. That’s a post for another day.)
Last week, my second-year students came back from work-terms with glowing reviews. The employers wrote specifically about students’ discernment in asking significant questions without needing continual reassurance, their competence in tackling unfamiliar tasks, and their ability to make sense of technical text.
The 2nd-year students reported feeling confident and well-prepared. I got a visit in my office from a student who had been a vocal critic of my increasingly “weird” teaching. He shook my hand, looked me in the eye, and told me that he appreciated how well the tasks he performed in class reflected the industry. He had just aced the employer’s entrance test on the first try.
The 1st-year students did well on their final project (an FM transmitter), becoming increasingly self-directed in developing test procedures, troubleshooting systematically, and recording their results (including migrating their lab notes from paper to Excel and Visio). Their feedback is positive and constructive. Here are their thoughts on what’s working well:
- “The teaching aspects that are new to me.”
- “Very dedicated teachings from an involved and thorough instructor.”
- “Learning new concepts”
- “The skill sheets”
The only suggestion about what to change (other than “nothing”) was the balance between theory and shop time. I agree. In the last 5 weeks, I collapsed back into lecture mode, mostly because I was tired and couldn’t figure out what to do instead. I have ideas for next year.
So this post is my way of saying hello, and keeping track of some things I plan to write about next. In response to some long-ago requests, I’m working on posts about
- an example of a measurement cycle, including how I chose the questions, why they arose in the first place, and how students investigated them
- an example of a research cycle, including topics students presented and topics I presented
- an example of how I assessed students’ critical thinking skills, including drafts of students’ writing and the kinds of feedback I gave
There are lots of other things in the hopper but I probably need to do these first for other topics to make sense. If you notice something that I’ve left out or skipped over, your suggestions would be very welcome, as I try to organize this into a coherent story.