So, I just realized that my last blog post is really the same thing I’ve written twice before. Clearly I’m still trying to wrap my head around it. In an attempt to stop myself from repeating the same stuff in the same stunned tone, here are some nuts and bolts of how our week runs.
On Friday, we have a 3-hour shop period. That’s when we gather most of our new info. Students work in twos (we have a total of 6 pairs… yes, I am infinitely grateful for my tiny class). I choose 4-6 questions that have come up that week. Each lab pair must contribute one data point for each question, either by measuring or looking something up (I don’t specify, and so far they’ve made sensible judgements about this). They write them up in a lab journal. At the end of the day, I photograph everyone’s lab journal with CamScanner (an idea I picked up from Gas Station Without Pumps’ comment on that post), turn everyone’s data into a single PDF, and make copies for every student.
On Monday, we have a double period. If we’re ready for a quiz, I give it in the first hour. Then I break students into groups of 3 — groups are fluid and chosen by the students each week, but a group cannot include anyone’s lab partner. The students get copies of everyone’s lab data, and each group summarizes a single question, evaluating the information on the same grounds I use when evaluating their work: precision, clarity, internal consistency, cause and effect, connections to the model. On Thursday, the groups present their findings. Whatever meets with consensus from the class gets added to the model. I don’t vote, but I do ask questions. If I think they’re being unclear, jumping to conclusions, or contradicting themselves, I ask questions to draw that out — whether their point was “right” according to the textbook or not.
Often we will accept a group’s proposed addition to the model (“the more resistance a component has, the more voltage is dropped across it”) but it generates new questions (are they exactly proportional?). Those become our questions for Friday’s investigation.
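(For readers who want the “are they exactly proportional?” question made concrete, here’s a minimal sketch. The supply voltage and resistor values are made up for illustration; they are not the class’s data.)

```python
# Hypothetical series circuit: one supply, three resistors in series.
# Illustrative values only -- not measurements from the class.
V_supply = 9.0                  # volts
resistances = [100, 220, 470]   # ohms

total_R = sum(resistances)

# The same current flows through every component in a series circuit,
# so each voltage drop is proportional to that component's resistance:
#   V_i = V_supply * R_i / total_R
drops = [V_supply * R / total_R for R in resistances]

for R, V in zip(resistances, drops):
    print(f"{R} ohm -> {V:.2f} V")

# The individual drops add back up to the supply voltage:
print(round(sum(drops), 6))  # 9.0
```

So the model’s claim holds in the exactly-proportional form: double a component’s resistance (holding the others fixed) and its share of the supply voltage grows in direct proportion to its share of the total resistance.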
How This Helps Students Hold Each Other Accountable
- Students have to read each other’s lab notebooks. I have never seen lab notebooks improve so much so quickly. Examples are shown above comparing the beginning of the year to where we are now.
- Students hold each other accountable for the quality of their notebooks. Poorly organized notebooks are annoying and slow to read; students are often walking across the room to ask each other to decrypt their notes. This provides instant feedback about the difference between notes that are understandable to oneself and notes that are understandable to others.
- It creates an authentic audience for lab notes (I got fired up about this thanks to Joss Ives’ post about it).
- It motivated the introduction of standard schematic symbols, drafting conventions, and criteria for deciding when to use a paragraph and when to use a table (I just waited until I heard this six times in an hour: “Aaaaargh!! Everyone should write things down the same way!”)
- It had the unprecedented result that students wrote the date and a title at the top of each page of notes. (This was the result of repeated exclamations like “Grrr! I can’t even tell what question they’re trying to answer here!!”)
- I don’t grade lab notebooks. Students improve their notebooks because they see one they liked and decide to emulate it; because they have to read one that’s awful and they don’t want to inflict that on each other; and because their classmates will interrupt them to demand clarification when data are confusing or can’t be found.
- The students see the value of the approach: it’s less work to analyze 6 data points for a single question than to try to make sense of 1-2 data points for 6 questions or (heaven forbid) to take 5-6 measurements for each question themselves.
- Assumptions and interpretations are more likely to come to the fore. If a student takes 5-6 measurements themselves, they are likely to reproduce the same assumption each time, resulting in a consistent set of data that doesn’t challenge their expectations. If they have to analyze others’ data and integrate it with their own, different interpretations become more apparent. That’s why they analyze data in groups with representatives of other lab pairs: I hope it reduces the risk that they will simply conclude that “the other groups were wrong.”
- Even though students are only analyzing the data for one of the questions, they’ve tested all of them, and they will evaluate their peers’ presentation of the ones they didn’t analyze.
- The students hold each other accountable for their data analysis. They know that if a poorly reasoned idea gets into their model, it will cause confusion, incorrect predictions, and lost time down the road when they have to take it out and change it. So they are cautious in accepting new ideas, rigorous in questioning each other, accepting of the necessity of repeating trials occasionally, and supportive in helping other groups find connections and causes.
Payoff: Predictive Power
We had a major payoff last week. The model predicted that an open switch in a series circuit would measure the supply voltage across it. This baffled almost everyone; no one was able to propose a sequence of cause and effect that would explain it. They concluded that the model must not work for open circuits. When they tested the circuit, of course, they were flabbergasted to find that the model was right. Our model can predict things for which we have no explanation. Their descriptions of this: “strong,” “awesome,” “spooky.”
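One way to see why the prediction works is to treat the open switch as the limiting case of a very large resistance in series with the rest of the circuit. Here’s a sketch with hypothetical values (the supply voltage and the rest-of-circuit resistance are invented for illustration):

```python
# Treat an open switch as a huge resistance in series with the rest
# of the loop. Values are hypothetical, for illustration only.
V_supply = 9.0     # volts
R_rest = 330.0     # combined resistance of everything else, ohms

for R_switch in [1e3, 1e6, 1e9, 1e12]:
    # Voltage-divider share of the supply taken by the switch:
    V_switch = V_supply * R_switch / (R_switch + R_rest)
    print(f"R_switch = {R_switch:.0e} ohm -> {V_switch:.6f} V")

# As R_switch grows without bound, essentially no current flows,
# the other components drop ~0 V, and the full supply voltage
# appears across the open switch.
```

The cause-and-effect chain the class was hunting for is hiding in the limit: no current means no drop anywhere else, so the loop’s entire voltage has to appear across the gap.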