Whose model do I assess?

My first-year students have bowled me over and taken off down the inquiry learning path.  I'm running to catch up.  They're developing a model of what atoms are made of and why electrons move (along with other awesome questions, like: what does it mean for subatomic particles to "touch"?  Nuclei really can move even though the book says no, huh?  And holy crap, are we made out of a whole lot of nothing?!).

Luck favours the prepared mind, and I was prepared to bust up the idea that teachers and textbooks contain a substance called "truth" before which all minds must bow.  I've been coaching them in critical thinking skills like clarity, precision, noticing whether ideas support or contradict each other, and following logical implications.  We predict things, measure things, and check a variety of sources.  Slowly, we're coming around to the idea that there is no single correct model — there are a variety of models, useful for various purposes, and the goal is to judge their relative quality, not to award the single prize of truth.  They are not accountable to me — they are accountable to reality.

This whole epistemic experiment goes in the trash if I give a quiz at the end of the week where I expect a “correct” answer according to the textbook.  Obviously, I have to make sure they have a wide enough variety of experiences and enough coaching in thinking through them that they arrive at a workable model.  So far this has been a non-problem.  By far the hardest thing we’ve done is think through the implications of a model while checking in with our previous knowledge, without conflating the two.  I can’t think of anything more useful to do with our time.

I think this means I have to stop assessing their ability to solve questions from the back of the chapter, and start assessing whether they have applied their own model intelligently.

As the students put it, “you’re going to grade us based on what we say??”  I can’t see any other way forward.  And I think I like it.

4 comments

  1. Yup. All I care about is that they can defend their answer with evidence. Eventually… I expect that they'll get to the "right" answer. At least at my level, they don't get into anything subtle enough that a mountain of evidence can be explained equally well by different models.

    If they don’t get there eventually, either I haven’t done my job asking the right questions or they’re fitting their evidence to their conclusion.

    • I think I'm starting to get it. Your post about the cycle helped — especially the part about sorting out the difference between reasoning through the implications of the model, and predicting what they "think" will happen. They come up with incredible metaphors and I don't want to squash them, but we need a way to stay aware of the distinction. I've tried double-entry diaries (where they record both the model's predictions and their own thinking) to help keep the two separate, but it's still really difficult for students to do. They are also having a hard time resisting the temptation to repeat reference sources that they don't understand. Still working on it…

  2. Ready-made questions in texts and ready-made tests in multiple-choice quiz format are publishers' marketing tools. They have nothing to do with teaching kids or with assessing and evaluating student knowledge. What you are doing is harder, but much more valid.
