My inquiry-based experiments forced me to face something I hadn’t considered: my lack of confidence in the power of reasoning.*  I spent a lot of time worrying that my students would build some elaborately misconstrued model that would hobble them forever. But you know what?  If you measure carefully, and read carefully, and evaluate your sources carefully, while attending to clarity, precision, causality and coherence, you come up with a decent model.  One that makes decent predictions for the circumstances under test, and points to significant questions.

Did I really believe that rigorous, good quality thinking would lead to hopelessly jumbled conclusions?  Apparently I did, because this realization felt surprising and unfamiliar.  Which surprised the heck out of me.  If I did believe that good quality thinking led to poor-quality conclusions (in other words, conclusions with no predictive or generative power), where exactly did I think good-quality conclusions came from?  Luck?  Delivered by a stork?  I mean, honestly.

If I were my student, I would challenge me to explain my “before-thinking” and “after-thinking.”  Like my students, I find myself unable to explain my “before-thinking.”  The best I can do is to say that my ideas about reasoning were unclear, unexamined, and jumbled up with my ideas about “talent” or “instinct” or something like that.  Yup — because if two people use equally well-reasoned thinking but one draws strong conclusions and the other draws weak conclusions, the difference must be an inherent “ability to be right” that one person has and the other person doesn’t.  *sigh* My inner “voice of Carol Dweck” is giving me some stern formative feedback as we speak.

Jason Buell breaks it down for me in a comment:

Eventually…I expect that they’ll get to the “right” answer. At least at my level, they don’t get into anything subtle enough that a mountain of evidence can be explained equally well by different models.

If they don’t get there eventually, either I haven’t done my job asking the right questions or they’re fitting their evidence to their conclusion.

Last year, I worried a lot that my teaching wasn’t strong enough to pull this off — by which I mean, I worried that my content knowledge of electronics and my process knowledge of ed theory weren’t strong enough.  And you know?  They’re not — there’s a lot I don’t know.

But for this purpose, that’s not what I need.  I have a more than strong enough grasp of the content to notice when someone is being clear and precise, whether they are using magical thinking or causal thinking, begging the question, or contradicting themselves. And I have the group facilitation skills to keep the conversation focused on those ideas.  Noticing that helped me sleep a lot better.

A slight difference from Jason’s note above: I don’t expect my students to get to the “right” (i.e. canonical) answer.  The textbook we use teaches the Bohr model, not quantum physics. If they go beyond that, great.  If they don’t, but come up with something that allows them to make predictions within 5–10% of their measurements, draw logical inferences about what’s wrong with a broken machine, and keep their model clear, precise, and internally consistent, they’ll be great at their jobs.  And, it turns out, there is a limited number of possible models that satisfy those criteria, most of which have probably been canonical sometime during the last 90 years.  I don’t care if they settle on Bohr or Pauli.  I care that they develop some confidence in reasoning.  I care that they strengthen their thinking enough to have justified confidence in their reasoning, specifically.

* For the concept of “confidence in reasoning,” I’m indebted to the Foundation for Critical Thinking, which writes about it as one of their Valuable Intellectual Traits.