Things that dematerialize my patience in the classroom: magical thinking, question-begging, and conclusions that don’t follow from their premises. The real reason I was foaming at the mouth wasn’t the poor-quality reasoning itself; it was that I had no way to respond to it. The language I use to discuss these ideas is so foreign to my students that I didn’t know where to start.
One of the deliberate changes I made this year was defining criteria for good-quality reasoning. I hoped they would give us a shared vocabulary for judging the validity of inferences without my having to teach symbolic logic. I spent last summer reading Academically Adrift (my review here), about students’ lack of improvement in critical thinking during their post-secondary schooling, and a research paper about approaches to teaching critical thinking (seriously, go read this thing, especially the profiles of actual teachers. It’s fascinating). The former focused on how seldom students are asked to make or break an argument. The latter contained interviews with faculty who conflated good-quality reasoning with self-direction, reflectiveness, or constructed beliefs. In both cases, I recognized myself entirely too much. I was galvanized.
I decided to start with these criteria for good-quality reasoning:
- clarity
- precision
- internal consistency
- connection to our model
- connection to our experience/intuition
- seamless chain of cause and effect
- consideration of other equally valid perspectives
- credible sources
I chose them from a variety of options. The Foundation for Critical Thinking suggests nine intellectual standards: clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness.
I toyed with these all summer, checking them against examples I encountered that I thought were particularly well or poorly reasoned. They held up pretty well, but I wanted something more student-friendly, and there were things I wanted to add. I knew that meant I had to remove some, since this would be a lot for students to digest.
First I removed “fairness,” since it matters less than the others in what is essentially an intro physics course. We’re not talking about ethical dilemmas here (well, OK, we are, but we’re not assessing them). Breadth and depth are important but, again, it’s an intro course: it’s intended to be neither of those. Accuracy was the only standard my students would already recognize, but my point was to help them judge the quality of their thinking when they didn’t know the answer. Significance and relevance were almost indistinguishable. I put those in the second tier: things we could get to later.
Clarity, Consistency, Causality
“Logic” was a problem. I knew from previous experience that my students defined “logical” as a nebulous cross between “familiar” and “reinforcing my preconceptions.” I had to stay away from that word. So I broke it down into exactly what I mean by it: internal consistency, coherence with other accepted ideas, and distinguishing cause from correlation.
I kept clarity and precision separate, because I wanted to use them differently. Clarity is for asking “what do you mean exactly?” For example, we might start with “The voltage goes through.” Asking “What do you mean by ‘voltage’ exactly?” and “What do you mean by ‘through’ exactly?” will get us to “the electrons on one side of the resistor have more energy than the electrons on the other side.” Precision is for asking “in what direction” or “how much?” It gets us to “on which side do the electrons have more energy?” or “how much of a difference is there?” (It also motivated some fantastic conversations about the meaning of “significant figures” — post for another day). It was a way to help students remember to ask these questions of themselves.
That yielded criteria 1-4 and 6.
Plausibility Is Not Enough
Then I read a post at Educating Grace about how “making sense of things” sometimes leads us to make myths instead of understanding. Grace asks the kinds of generative questions that make me feel like I’m growing new synapses even when I can’t answer them, and this was one of them. There’s an excellent example on Learning Museum about fooling ourselves with plausible answers. Number 7 was an attempt to ensure that, once we have constructed a well-reasoned train of thought, we take a moment to check whether there are others.
Authority Does Not Equal Credibility
Finally, #8 allowed me to open a conversation about which sources we should use, what it means to “believe” a teacher when they tell you something, what a “fact” is exactly, and lots of other good stuff like the difference between an opinion and a judgement. I knew we’d have to talk about plagiarism and I wanted to motivate the conversation with our judgement of reliability, not an argument about where the commas go in APA style.
One of my favourite conversations started when the students got a bit combative about my stance toward Wikipedia. “You tell us to use Wikipedia, and last year our teachers banned us from using Wikipedia,” they said, as if “teachers” were a single hypocritical hive-mind, not a collection of individuals and institutions that sometimes disagree. I got to practice another new gambit: “Why would a reasonable teacher do that?” I asked. This defused the combativeness and got students considering a variety of points of view. “Wikipedia could be wrong” came up, of course. I would have to wait a few weeks before we got to “the textbook could be wrong. Or oversimplified. Or begging the question. Or poorly reasoned.”
I put them on notice that I required at least two sources for absolutely everything, including Wikipedia and including the textbook and including things their current or former teachers said. My point is not that people should trust Wikipedia more. It’s that they should trust everything else less. Or that we need to redefine what “trust” means, exactly, in the context of “knowing.”
Wow Mylene – thanks for the great resources, and for your work and progress in exposing your learners to critical thinking and accountability in learning.
I found a piece on accountability in learning – “Learners must be aware of what they are meant to learn, what they are actually learning and ultimately be able to do what they have learned.” – Global Learning Partners
Sounds like you have created an environment for this to happen.
Thanks Libby — glad you enjoyed the resources. The dinosaur comic (first link) is one of my all-time faves 🙂
[…] a previous post, I explained the thought process behind seven of my choices of standards for evaluating thinking. […]
Might I suggest a future post? Could you give an example of an assignment and some student responses, and how you give feedback on these criteria? I’d love to see how you put these into practice.
Definitely — it’s in the works. The responses have been really interesting, and the students have taken to it quite well so far.
[…] her post Evaluating Thinking: Why These Criteria?, Mylène describes the criteria she provides her students for what makes a good reasoning […]
[…] else anyone remembered about magnets, then about atoms. I asked questions, introducing the standards for reasoning by asking “What exactly do you mean by … ” or “How much…” or “How could idea A and […]
[…] in effectiveness that I got from skills-based grading, self and peer assessment, incorporating critical thinking throughout my curriculum, or shifting to inquiry-based modelling. But, I wasn’t asked to […]
[…] How I Chose my Criteria for Critical Thinking […]