This semester, I turned over the DC Circuits course to the questions my students asked. I started because their questions were excellent. But I continued because I found ways to nurture their question-creation that both introduced students to “thinking like a tech” and, not coincidentally, “covered” all the curriculum. First, we needed a shared purpose: to be able to predict “what electrons are doing” in any DC circuit by the end of the semester. Next, we needed to generate questions. Here are a few examples of how it happened.
Class discussion parking lot
In the first few days of the semester, we brainstormed a list of ideas, vocab, and anything else anyone remembered about magnets, then about atoms. I asked questions, introducing the standards for reasoning by asking “What exactly do you mean by … ” or “How much…” or “How could idea A and idea B be true at the same time?” Any time we reached “I don’t know” or a variety of apparently contradictory answers, I phrased it as a question and wrote it down. This turned out to be a useful facilitation technique, to be used when students were repeating their points or losing patience with a topic. I checked with the class that I had adequately captured our confusion or conflicting points, and stored them for later. Two days into the course we had questions like these:
- What does it mean for an electron to be negative? Is it the same as a magnet being negative?
- Does space have atoms?
- Can atoms touch?
- We think protons are bigger than electrons because they weigh more, but do they actually take up more volume?
I continued throughout the semester to gather their questions; I periodically published the list and asked them to choose one to investigate (or, of course, propose another one). We never ran out.
I assess their reasoning
At the beginning, I waded gradually into assessing students’ reasoning. We started with some internet research in which everyone had to find three sources that contributed to our understanding of our questions. I asked them to use Cornell-style note paper to record what their source said on the right; their own thoughts (questions, connections to their experience, visuals, whatever) on the left; and a summary at the bottom. (Later I tried this with a Google Doc, but I went back to paper because of how helpful sketches and formulas turned out to be.)
I collected these and wrote feedback about clarity, precision, logic, and the other criteria for assessing reasoning. “What do you mean by this exactly?” “How much more?” “Does the source tell you what causes this?” “Do you think these two sources are contradictory?” “Have you experienced this yourself?” I also kept track of all the questions they asked and added them to the question list. Here’s an example, showing a quote from the source (right) and the student’s thinking (left) with my comment (bottom right).
There’s a lot of material to work with here: finding parallels between magnetic and electric fields; what the concept of ground means exactly; and an inkling of the idea that current is related to time. I love this example because the student is working through technical text, exploring its implications, comparing them to his base knowledge, and finding some “weirdness.” I mean, who ever heard of a magnet getting weaker because it was lying on the ground rather than sticking to the fridge? Weirdness is high on my priority list for exploring questions.
I continued periodically to ask students to record both a published source and their thoughts/questions. There’s something about that wide, blank column on the left that makes people want to write things in it.
Students respond to my assessments
The next assignment was for students to pencil in some comments responding to my comments. This got the ball rolling; they started to see what I was asking for. They also loosened up their writing a bit; there’s something about writing diagonally in the corner of a page that makes people disinclined to use 25-cent words and convoluted sentence structure. Exhibit A is the same student’s response to my question:
Ok, so this student is known for flights of fancy. But there’s even more to work with here: air as an insulator; the idea of rapid transfers of electrons when they come in contact with certain materials — as if the electrons are trying to get away from something; an opportunity to talk about metaphors and what they’re good for.
This exercise also set the stage for the idea that the comments I leave on their assignments are the beginning of a conversation, not the end, and that they should read them. Finally, it generated questions (in their responses). I was pretty broad in my interpretation of a question. If a student claimed that “It’s impossible for X to be the same as Y,” and couldn’t substantiate it in response to my question, it would end up in the question list as “Can X be the same as Y?”
They assess a published source’s reasoning
The information the students found fell into four broad categories. I printed copies of the results for everyone. On the next day, students broke into four groups with each group analyzing the data for one category. They had to summarize and evaluate for clarity, precision, lack of contradictions, etc. I also asked them to keep track of what the group agreed on and what the group disagreed on. As I circulated, I encouraged them to turn their disagreements into questions.
I assess their assessments
I gave groups live feedback by typing it into a document projected at the front of the room while they were working. They quickly caught on that I was quoting them and started watching with one eye. I lifted this group feedback model wholesale from Sam Shah, except that my positive feedback was about group behaviours that contributed to good-quality reasoning (pressing each other for clarification, noticing inferences, noticing contradictions, etc.).
Rubric for Assessing Reasoning
I read their submissions and wrote back. Next day, they had to get back into their groups, consider my feedback, and decide whether to change anything. Then I asked them to present to their classmates, who would be assessing the presentations. I knew they would need some help “holding their thinking” so I made this “reading comprehension constructor” (à la Cris Tovani) based on our criteria for evaluating thinking: it’s a rubric for assessing reasoning.
If you look closely you’ll see that three new criteria have appeared, in the form of check boxes; they are criteria about the source overall, not about the quality of reasoning. Is the source reviewed by experts? Is the source recent (this forces them to start looking for copyright dates)? Is the source relevant to our question? I asked that they consider the presentations carefully, and accept only the points that met our criteria for good-quality reasoning. Each student filled out a rubric for each presentation. Rubrics were due at the end of class.
Voila: more questions.
Conclusion: Attributing authorship helps
I suspect that my recording of who asked which questions is part of what makes this work (see this post by John Burk for an interesting video of John’s colleagues discussing this idea with Brian Frank). The students know and trust that I will attribute authorship of their ideas now; it seems to make them more willing to entrust their questions to me. They’ve started saying things like “I don’t want to reject this idea from the model, I think it’s a good starting point, but next I think we need to know more about what voltage is doing exactly that causes current. Could you add that question to the list?”