You are currently browsing the category archive for the ‘Critical Thinking’ category.

I read a blog post recently about the use of smartphones in the classroom, and it was thought-provoking enough to make me want to flesh out some ideas.  I submitted them as a comment two weeks ago, but they didn’t appear on the blog.  My inquiry about whether the comment was rejected or simply lost in the ether also went unacknowledged, so I thought I’d post it here.

Smartphones Work Well In My Classroom For…

I really appreciate when students take photos of the board, so they can pay attention and join the conversation instead of copying what I’m writing.  A document-scanning app (e.g. CamScanner) can correct parallax and improve contrast, making it look like you own a scanner the size of the whiteboard.

If students are working on team-sized interactive whiteboards, it can also be a great way to capture what they’ve come up with as a group, instead of having to re-copy it into their notebook.

Tablets are extra-useful for this since the larger screen makes it easier to read and annotate the photos — especially useful are EzPDF and Freenote, although obviously cross-platform support can be an issue.

I also like having students take videos of themselves solving problems or demonstrating experiments — a big help when I don’t have time to see each person or group “live.”  Plus, hearing their voices as they describe their thinking gives me a better feel for what they’ve understood vs. what they’ve memorized.

Digital Natives?

The interesting thing is that many of my students, contrary to the received wisdom about digital natives, are surprisingly reticent about this.  It takes a significant amount of direct instruction for students to try these approaches, even when it seems to me that it would be a huge time-saver.  If I give an online and a conventional option for an assignment, the students overwhelmingly choose the conventional route (a paper notebook instead of a blog that would make their essay research searchable… written assignments instead of screencasts… typing instead of dictating papers with speech-to-text, for example — even Windows 7 has reasonably good native support).

My students, for various reasons, don’t have much time for adjusting or troubleshooting their devices (figuring out where the camera stores its pictures so that the pics can be attached to an email, for example) and often do not understand that folders are hierarchical.

But I Can Drive Without Understanding Engines, Right?

The good news is, teachers who fear that their students far outpace them in skill probably have less to fear than they think.  The bad news is, I suspect that we (including the students) tend to overestimate the degree to which using technology (as opposed to understanding it, or directing it) is inherently useful.

It’s a bit like knowing how to drive a car but not understanding that pressing on the accelerator is what uses up gas and increases the braking distance.  You can make the car go fast, but you probably can’t figure out whether going fast is a good idea at the moment.  Maybe you follow the speed limit diligently without being able to judge whether it’s prudent under the conditions; maybe you don’t follow the speed limit because you don’t know of any reason for its importance.  Besides being dangerous, both approaches are unthinking — abdicating responsibility to either the rule-makers or other drivers.

Making Vs. Using

One approach that seems to be having a lot of success is systematically teaching students to become makers and fixers of classroom technology instead of users/consumers.  I’m also excited about making programming accessible to kids.  Besides improving conceptual understanding and critical thinking, this approach can help us broach the idea that it’s not good enough to be a “native” of a society in which someone else holds the reins of power.  My question to them is not whether they are “digital natives” but whether they are “digital serfs.”  In other words, time to start paying attention to who are the programmers, and who are the programmed.

“Celebrating Peasants”

I went looking for a resource about “growth mindset” that I could use in class, because I am trying to convince my students that asking questions helps you get smarter (i.e. understand things better).  I appreciate Carol Dweck’s work on her website and her book, but I don’t find them

  • concise enough,
  • clear enough, or
  • at an appropriate reading level for my students.

What I found was Diana Hestwood and Linda Russell’s presentation about “How Your Brain Learns and Remembers.”  The authors give permission for non-profit use by individual teachers.  It’s not perfect (I edited out the heading that says “You are naturally smart” … apologies to the authors) and it’s not completely in tune with some of the neuroscience research I am hearing about lately, but it meets my criteria (above) and got the students thinking and talking.

Despite the authors’ warning that it’s not intended to stand on its own and that the teacher should lead a discussion, I’d rather poke my eyes out than stand in front of the group while reading full paragraphs off of slides. I found the full-sentence, full-paragraph “presentation” to work on its own just fine (CLARIFIED: I removed all the slides with yellow backgrounds, and ended at slide 48).  I printed it, gave it to the students, and asked them to turn in their responses to the questions embedded in it.  I’ll report back to them with some conversational feedback on their individual papers and some class time for people to raise their issues and questions — as usual, discussion after the students have tangled with the ideas a bit.

The students really went for it.  They turned in answers that were in their own words (a tough ask for this group) and full of inferences, as well as some personal revelations about their own (good and bad) learning experiences.  There were few questions (the presentation isn’t exactly intended to elicit them) but lots of positive buzz.  About half the class stayed late, into coffee break, so they could keep writing about their opinions of this way of thinking.  Several told me that “this was actually interesting!”  (*laugh*)  I also got one “I’m going to show this to my girlfriend” and one, not-quite-accusatory but clearly upset “I wish someone had told me this a long time ago.”  (*gulp*)

I found a lot to like in this presentation.  It’s a non-threatening treatment of some material that could easily become heavily technical and intimidating.  It’s short, and it’s got some humour.  It’s got TONS of points of comparison for circuits, electronic signal theory, even semiconductors (not a coincidence, obviously).  Most importantly, it allows students to quickly develop causal thinking (e.g. practice causes synapses to widen).

Last year I found out in February that my students couldn’t consistently distinguish between a cause and a definition, and trying to promote that distinction while they were overloaded with circuit theory was just too much.  So this year I created a unit called “Thinking Like a Technician,” in which I introduced the thinking skills we would use in the context of everyday examples. Here’s the skill sheet.

It helped a bit, but meant that we spent a couple of weeks talking about roller coasters, cars, and musical instruments.  Next year, this is what we’ll use instead.  It’ll give us some shared vocabulary for talking about learning and improving — including why things that feel “easy” don’t always help, why things that feel “confusing” don’t mean you’re stupid, why “feeling” like you know it isn’t a good test of whether you can do it, and why I don’t accept “reviewing your notes” as one of the things you did to improve when you applied for reassessment.

But this will also give us a rich example of what a “model” is, why they are necessarily incomplete and at least a bit abstracted, and how they can help us make judgement calls.  Last year, I started talking about the “human brain model” around this time of the year (during a discussion of why “I’ll just remember the due date for that assignment” is not a strong inference).  That was the earliest I felt I could use the word “model” and have them know what I meant — they were familiar enough with the “circuits and electrons model” to understand what a model was and what it was for.  Next year I hope to use this tool to do it the other way around.

The first-year students are shocked that we accept all these ideas about electrons just because the sources support each other, even though no one’s seen an electron, and even scientists aren’t completely sure what’s going on. They’ve been asking a lot of questions about “how can we ever be sure of anything?”  We’ve talked a lot about the difference between accepting an idea based on evidence and believing it on faith, how to judge the quality of sources, etc.  They’ve been practicing asking clarifying questions, summarizing each other’s ideas, and identifying cause and effect.  In that vein, a student came into my office the other day to tell me this interesting story…

I lead an alliance of players in [online game] and the other day I couldn’t log in.  I checked all the computers at school too, and they did the same thing.  So then I called tech support for [internet carrier], they said it’s not them.  So I asked, “Well, how is it not you??”  They eventually said that GoDaddy hosts [game server], and GoDaddy’s servers were down.  So then I tried to call GoDaddy, because I want to post something on facebook but not until I checked my sources.  And I was like, ‘it’s just like school, whoa.’  I tried to explain it to my boyfriend but he said ‘I think you’re @#$%ed.’

She laughed in delight.

Last February, I had a conversation with my first-year students that changed me.

On quizzes, I had been asking questions about what physically caused this or that.  The responses had a weird random quality that I couldn’t figure out.  On a hunch, I drew a four-column table on the board, like this:

Topic: Voltage

[a table with four empty columns, including “cause,” “definition,” and “characteristics”]
I gave the students 15 minutes to write whatever they could think of.

I collected the answers for “cause” and wrote them all down.  Nine out of ten students said that a difference of electrical energy levels causes voltage.  This is roughly like saying that car crashes are caused by automobiles colliding.

Me: Hm.  Folks, that’s what I would consider a “definition.”  Voltage is just a fancy word that means “difference of electrical energy levels” — it’s like saying the same thing twice.  Since they’re the same idea, one can’t cause the other — it’s like saying that voltage causes itself.

Student: so what causes voltage — is it current times resistance?

Me: No, formulas don’t cause things to happen.  They might tell you some information about cause, and they might not, depending on the formula, but think about it this way.  Before Mr. Ohm developed that formula, did voltage not exist?  Clearly, nature doesn’t wait around for someone to invent the formula.  Things in nature somehow happen whether we calculate them or not.  One thing that can cause voltage is the chemical reaction inside a battery.

Other student: Oh! So, that means voltage causes current!

Me: Yes, that’s an example of a physical cause. [Trying not to hyperventilate.  Remember, it’s FEBRUARY.  We theoretically learned this in September.]

Me: So, who thinks they were able to write a definition?

Students: [explode in a storm of expostulation.  Excerpts include] “Are you kidding?” “That’s impossible.” “I’d have to write a book!”  “That would take forever!”

Me: [mouth agape]  What do you mean?  Definitions are short little things, like in dictionaries. [Grim realization dawns.]  You use dictionaries, right?

Students: [some shake heads, some just give blank looks]

Me: Oh god.  Ok.  Um.  Why do you say it would take forever?

Student: How could I write everything about voltage?  I’d have to write for years.

Me: Oh.  Ok.  A definition isn’t a complete story of everything humanly known about a topic.  A definition is… Oh jeez.  Now I have to define definition. [racking brain, settling on “necessary and sufficient condition,” now needing to find a way to say that without using those words.]  Ok, let’s work with this for now: A definition is when you can say, “Voltage means ________; Whenever you have ___________, that means you have voltage.”

Students: [furrowed brows, looking amazed]

Me: So, let’s test that idea from earlier.  Does voltage mean a difference in electrical energy levels? [Students nod]  Ok, whenever you have a difference in electrical energy levels, does that mean there is voltage? [Students nod] Ok, then that’s a definition.

Third student: So, you flop it back on itself and see if it’s still true?

Me: Yep. [“Flopping it back on itself” is still what I call this process in class.] By the way, the giant pile of things you know about voltage, that could maybe go in the “characteristics” column.  That column could go on for a very long time.  But cause and definition should be really short, probably a sentence.

Students: [Silent, looking stunned]

Me: I think that’s enough for today.  I need to go get drunk.

Ok, I didn’t say that last part.
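For what it’s worth, the “flop it back on itself” test can be written compactly.  This formalization is mine, not something I showed the class:

```latex
% A definition must survive being "flopped back": it holds in both directions.
% "Voltage means a difference in electrical energy levels" passes, because
\text{voltage} \iff \text{a difference in electrical energy levels}
% A physical cause only runs one way, so it fails the flop test:
\text{chemical reaction in a battery} \implies \text{voltage}
% (voltage does not imply a battery; generators and moving magnets cause it too)
```

In logic terms, a definition is a biconditional (the “necessary and sufficient condition” I was groping for in the moment), while a physical cause is at best a one-way implication.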

When I realized that my students had lumped a bunch of not-very-compatible things together under “cause,” other things started to make sense.  I’ve often had strange conversations with students about troubleshooting — lots of frustration and misunderstanding on both sides.  The fundamental question of troubleshooting is “what could cause that,” so if their concept of cause is fuzzy, the process must seem magical.

I also realized that my students did not consistently distinguish between “what made you think that” and “what made that happen.”  Both are questions about cause — one about the cause of our thinking or conclusions, and one about the physical cause of phenomena.

Finally, it made me think about the times when I hear people talk as though things have emotions and free will — especially when high-tech products like computers are accused of “having a bad day” or “refusing to work.”  Obviously people say things like that as a joke, but it’s got me thinking: how often do my students act as though they actually think that inanimate objects make choices?  I need a name for this — it’s not magical thinking, because my students are not acting as though “holding your tongue the right way” causes voltage.  They are, instead, acting as though voltage causes itself.  It seems like an ill-considered or unconscious kind of animism. I don’t want to insult thoughtful and intentional animistic traditions by lumping them in together, but I don’t know what else to call it.

Needless to say, this year I explicitly taught the class what I meant by “physical cause” at the beginning of the year.  I added a metacognition unit to the DC Circuits course called “Technical Thinking” (a close relative of the “technical reading” I proposed over a year ago, which I gradually realized I wanted students to do whether they were reading, listening, watching, or brushing their teeth).  Coming soon.

How I got my students to read the text before class: have them do their reading during class.

Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.

It’s not transformative educational design, but it’s an improvement, with these advantages:

  1. It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
  2. The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric.  That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
  3. When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class.  That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
  4. Knowing that I will address their questions provides an incentive to actually ask them.  It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
  5. Even in a classroom of 20 people, each person gets an individualized pace.
  6. I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
  7. We’re using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo.  This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns.  Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…

It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.

I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources.  It’s pretty similar to what I would say if I were lecturing.  I write the summaries in an informal tone intended to start a conversation.  Here’s an example:

And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).


Highlights of student feedback:

Noticing connections to earlier learning

When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite


When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol

Noticing confusion or contradiction

What do f2(OL) and Av(OL) stand for?

I’m still not sure what slew-induced distortion is.

I don’t know how to make sense of the f2 = funity/Av(CL).  Is f2 the bandwidth?

In [other instructor]’s course, we built an audio monitor, and we used an op amp.  We used a somewhat low frequency (1 KHz), and we still got a gain of 22.2.  If I use the equation, the bandwidth would be 45Hz?  Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?

Asking for greater precision

What is the capacitance of the internal capacitor?
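The bandwidth question above is a nice one to work through.  Here is a quick sketch of the gain-bandwidth relation f2 = funity/Av(CL); since the actual part isn’t named, the 1 MHz unity-gain frequency below is an assumption (typical of a general-purpose op amp like a 741):

```python
def closed_loop_bandwidth(f_unity_hz: float, av_cl: float) -> float:
    """Approximate -3 dB closed-loop bandwidth: f2 = f_unity / Av(CL)."""
    return f_unity_hz / av_cl

# Hypothetical unity-gain frequency in Hz; real parts start around 1 MHz.
f_unity = 1_000_000.0

bw = closed_loop_bandwidth(f_unity, 22.2)
print(round(bw))  # 45045, i.e. about 45 kHz rather than 45 Hz
```

The 45 Hz figure would only follow if funity were 1 kHz.  And in any case, f2 is the upper edge of a low-pass response that extends down to DC, not a narrow band centered on the signal, so a 1 kHz signal sits comfortably inside a 45 kHz bandwidth.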

Is this a “flipped classroom”?

One point that stuck with me about many “flipped classroom” conversations is designing the process so that students do the low-cognitive-load activities when they’re home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they’re in class, surrounded by supportive peers and an experienced instructor.

This seems like a logical argument.  The trouble is that reading technical material is a high-cognitive-load activity for most of my students.  Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone.  The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.

The conclusion follows: have them read it in class, where I can support them.

Today we were brainstorming ideas about electricity, practicing clarifying, and creating questions that start “What causes…”.  Some students are anxious about this, and seem to fear that if they ask those questions, they will have to answer them.  My goal is for us to draw the boundaries of what we do and don’t know — not to get lost in some metaphysical endless loop.  Facing the giant pile of what we don’t know is hard sometimes.

I say “If it seems like we’ll never run out of questions, don’t worry.  We don’t have to answer those questions — we’re just keeping track of what we have and haven’t answered.  And anyway, if we ran out of questions, wouldn’t that be awful and boring?”

The answer from the back of the room is, “No, that would be great.  And then I’d be smart.”

What’s my next move?

In my ongoing struggle to help my students make sense of their own mistakes, I sometimes hear them say that the reason they misapplied a skill is that they were “overthinking.”  I’ve always had a hard time responding to this.  I’m not even sure I know exactly what they mean by it, and when I try to have the conversation, I get the impression that there are so many hidden assumptions that we’re not communicating well.

I want them to focus on the quality of their thinking, not the amount, so I find the conversation frustrating.  If I were to try to put myself in their place, here are some possible translations:

  • Is their meaning of “overthinking” similar to my meaning of “close reading”?
  • “Over”-thinking… too much thinking?
  • Thinking carefully might bring up new possibilities that you can neither support nor contradict.  If we’re in class when it happens, it probably causes perplexity.  If the student is in a test when it happens, their inability to either test the new possibilities or ask questions about them is probably really frustrating — a frustration that they blame on the thinking itself.
  • Thinking carefully (or a lot?) makes you start noticing complexity and nuance.  If you are noticing them for the first time, they may distract your mind away from the things you used to think about, making a familiar landscape seem unfamiliar.
  • Is this related to the level of abstraction?  If students are used to reasoning within an abstraction that they accepted but did not build (in other words, they did not choose to simplify or remove information — the model was given to them that way), then thinking closely might cause them to notice one of the other “rungs” of the abstraction ladder, which could change the pattern of their reasoning.

I’m going to try to pay closer attention to this in the coming year.  In the meantime, it came up in class today and I was finally pleased with how I responded.

We had just finished doing the bicycle experiment inspired by Rebecca Lawson’s research.  Students look at stick drawings of bicycles and have to pick the one that most resembles an actual bike.  Lots of people were surprised at how difficult it was.  One brave student shared “I don’t know why, but I thought the chain ran from wheel to wheel.”  We talked a bit about how easy it is to feel familiar with things, and genuinely know a lot about them, while not noticing what we don’t know.  I then moved on to the next topic — the importance of double-checking what we read, hear, and remember.

I was talking about how memory can be misleading.  I used the example of the feeling you have when you walk into a test feeling confident, then sit down and realize you can’t solve the question.   The same student fell back on what seemed to be a tried-and-true way of thinking, commenting, “Isn’t it true that a lot of times you overthink things, and you should just stick with your first instinct?”

My reply was to ask gently, “How did it work out with the bicycle?”

I went on to say that what I expected from them was not more thinking nor less thinking, but technician thinking.  Too much food can make you sick and so can too little.  The wrong kind of food for your situation can also be bad.  Similarly, our goal is not a certain quantity of thought, but a certain kind — particular habits of mind based on particular specialized skills and ideas.  We’ll see how this supports our conversations in the future.

How do you learn to use a new piece of software (or web service or smartphone)?  I notice that some people press all the buttons, others prefer step-by-step instructions in the form of “press this button, then press that button.”  Some want to watch an experienced user, then experiment on their own (and I’m sure there are lots of other in-between approaches).

I got to thinking about this because my partner (who is quite uneasy about computers) was trying to email me an address from an electronic address book, but wrote it out on paper then typed it in.  When I suggested copying and pasting, the response was “I don’t know how to copy in this program.”  It’s an interesting point.  Not everyone knows that there are software conventions determined by the operating system.  But in the absence of that knowledge, I think some users would try the “copy” routine that worked for them in other programs, just to see if it worked.  Others would not trust themselves to try something in which they haven’t been directly instructed.

Does anxiety about new technology cause people to not experiment?  Or does the lack of habit/experience with experimenting cause the anxiety?  Or both?

I discussed this with a friend over dinner.  She was describing her attempts to encourage broader use of the electronic media available at her workplace, and is definitely a “press all the buttons” kind of user.  She is not a tech professional, and she is not 22, so the stereotypical answers are clearly inadequate.  I asked her where she learned to engage with unfamiliar technology that way.  Her answer was, “from my long-standing distrust of humans.”  We shared a laugh, but it gradually seemed less funny. 

I don’t think she doesn’t trust people to be honest.  I take it to mean that she doesn’t trust people to be right.  At least not all the time, and not comprehensively.  It connects to a very interesting exchange that happened at Casting Out Nines and Gas Station Without Pumps.  Does trusting our teachers make it easier to learn?  Or harder?

I think a better question is “trust them to do what?”

When I am learning from someone (I include the authors of books), I need to trust that they will respect me.  I also need to trust that they are qualified and experienced with the material. For the sake of my learning, I also need to not trust them to be right.  It’s possible there’s a typo or that the teacher misspoke (or truly misunderstands).  It’s much more possible that what I understood is not what the author/teacher meant.  If I “trust” my teacher to “tell me the truth,” what I am really trusting is my own perception of what they meant — which is highly fallible even if the source material is accurate.  Besides the problem of miscommunication, there’s a deeper problem: trusting a source to be right means reasoning from authority — and that’s faith, not science.  If students are engaged in an un-scientific reasoning process, it undermines whatever scientific content we are reasoning about.

Where it falls apart in my classroom: it’s hard for my students to distinguish between not assuming their teachers are right and assuming their teachers are wrong.

Homework: figure out how to convince students that they shouldn’t trust me to be right even though a lot of their schooling tells them that’s blasphemous; also, convince students that they should trust me to respect them even though a lot of their schooling tells them I won’t.

My inquiry-based experiments forced me to face something I hadn’t considered: my lack of confidence in the power of reasoning.*  I spent a lot of time worrying that my students would build some elaborately misconstrued model that would hobble them forever. But you know what?  If you measure carefully, and read carefully, and evaluate your sources carefully, while attending to clarity, precision, causality and coherence, you come up with a decent model.  One that makes decent predictions for the circumstances under test, and points to significant questions.

Did I really believe that rigorous, good quality thinking would lead to hopelessly jumbled conclusions?  Apparently I did, because this realization felt surprising and unfamiliar.  Which surprised the heck out of me.  If I did believe that good quality thinking led to poor-quality conclusions (in other words, conclusions with no predictive or generative power), where exactly did I think good-quality conclusions came from?  Luck?  Delivered by a stork?  I mean, honestly.

If I were my student, I would challenge me to explain my “before-thinking” and “after-thinking.”  Like my students, I find myself unable to explain my “before-thinking.”  The best I can do is to say that my ideas about reasoning were unclear, unexamined, and jumbled up with my ideas about “talent” or “instinct” or something like that.  Yup — because if two people use equally well-reasoned thinking but one draws strong conclusions and the other draws weak conclusions, the difference must be an inherent “ability to be right” that one person has and the other person doesn’t.  *sigh* My inner “voice of Carol Dweck” is giving me some stern formative feedback as we speak.

Jason Buell breaks it down for me in a comment:

Eventually…I expect that they’ll get to the “right” answer. At least at my level, they don’t get into anything subtle enough that a mountain of evidence can be explained equally well by different models.

If they don’t get there eventually, either I haven’t done my job asking the right questions or they’re fitting their evidence to their conclusion.

Last year, I worried a lot that my teaching wasn’t strong enough to pull this off — by which I mean, I worried that my content knowledge of electronics and my process knowledge of ed theory wasn’t strong enough.  And you know?  They’re not — there’s a lot I don’t know.

But for this purpose, that’s not what I need.  I have a more than strong enough grasp of the content to notice when someone is being clear and precise, whether they are using magical thinking or causal thinking, begging the question, or being self-contradictory. And I have the group facilitation skills to keep the conversation focussed on those ideas.  Noticing that helped me sleep a lot better.

A slight difference from Jason’s note above: I don’t expect my students to get to the “right” (i.e. canonical) answer.  The textbook we use teaches the Bohr model, not quantum physics. If they go beyond that, great.  If they don’t, but come up with something that allows them to make predictions within 5-10% of their measurements, draw logical inferences about what’s wrong with a broken machine, and it’s clear, precise, and internally consistent, they’ll be great at their jobs.  And, it turns out, there is a limited number of possible models that satisfy those criteria, most of which have probably been canonical sometime during the last 90 years.  I don’t care if they settle on Bohr or Pauli.  I care that they develop some confidence in reasoning.  I care that they strengthen their thinking enough to justifiably have confidence in their reasoning, specifically.

* For the concept of “confidence in reasoning,” I’m indebted to the Foundation for Critical Thinking, which writes about it as one of their Valuable Intellectual Traits.

Here’s a snapshot of our model as it existed around March — I have removed all student names for privacy, but I would normally track who proposed or asked what, and keep notes about the context in which questions arose.

We gathered evidence from our own measurements and from published sources.  The students proposed most of the additions to the model, but occasionally I proposed one. I’ll write in detail about each method next — this post covers some basic ideas common to all approaches. Additions to our shared mental model of electricity tended to be short, simple conceptual statements, like “An electron cannot gain or lose charge” or “Between the nucleus and the electrons, there is no ‘stuff.'”

Ground Rules

Here are the ground rules I settled on (though introduced gradually, and not phrased like this to the students).  I was flying by the seat of my pants here, so feel free to weigh in.

  1. Every student must contribute ideas to the model.
  2. For an idea to get added to the model, it must get consensus from the class.
  3. Deciding to accept, reject, or change a proposal must be justified according to its clarity, precision, causality, and coherence (not “whether you like the idea/presenter”).
  4. Each student is responsible for maintaining their own record of the model.
  5. Students may bring their copy of the model to any quiz.
  6. Quizzes will assess whether students can support their answers using the model.

1.  What if the class rejects someone’s proposal?

Adding an idea to the model is a skill on the skill sheet.  As with any other skill, students get lots of feedback, then they improve.  They have until the end of the semester.  I don’t allow them to move to the “class discussion” stage until I’m satisfied that they have a well-reasoned attempt that addresses previous feedback with a solid chance of getting something accepted. Unlike other skills, though, they’re getting individual feedback sheets from each of their classmates, not just me. Most people wrote two drafts.  No one wrote more than three.  More on this soon.

2. Consensus from the class — are you kidding?

I had a small class this year (12 students at the high-water mark) and I’m not sure how it will work with 20-24 next year — I’m thinking hard about how to scale this.  But yes, it really worked.  We used red-green-yellow flashcards to vote quickly.  I did not vote.  My role was to

  • Teach them how to have a consensus-focussed conversation
  • Point out supporting or conflicting ideas in the model, if the students didn’t — in other words, to “keep the debate alive [if] I’m not convinced they have compelling arguments on either side”
  • Do some group-dynamics facilitation, such as recording new questions that come up (for later exploration) as a result of the discussion, making sure everyone gets a chance to speak and is not interrupted, and sometimes doing some graphical recording on the whiteboard if the presenting students are too busy answering questions to do their own drawing.
  • Determine when the group is ready to move to a vote or needs to table the conversation

That made me a cross between the secretary and the speaker of the house.  Besides being productive, it was fun.  A note about group dynamics facilitation: I’m often frustrated by the edu-fad of re-labelling teachers “facilitators.”  I’ve done a lot of group dynamics facilitation for community groups.  It is a different role than teaching, and it’s disingenuous to pretend that students need only facilitation, not teaching.  However, in this situation, facilitation was called for.  The group had a goal (to accept or reject an idea for the model) and they had a process (evaluate clarity, precision, etc. etc.); my job was to attend to the process so they could attend to the goal.

3. What about people blocking consensus out of personal dislike, and other sabotage maneuvers?

There is a significant motivation for students to take part in good faith.  First off, no one wants these conversations to go on forever: they’re mentally challenging and no one has infinite stamina.  Second, if a well-supported, generative idea is left out of the model, no one will be able to use it on the next quiz.  Third, if the class accepts a poorly supported idea, it will cause a huge headache down the road when ideas start to contradict; we’ll have to backtrack, search out the flaw in our reasoning, and then uproot all the later ideas that the class accepted based on the poorly reasoned idea.  They were darned careful what they allowed into their model.

Other than that, I used my power as the facilitator.  Conflict happens in any group, and the usual techniques apply.  It was almost always possible to resolve the conflict fairly quickly: modifying the wording, adding a caveat.  We’d vote again, the idea would get accepted, and we’d move on. Sometimes a student felt strongly enough about their objection to propose an experiment that we should try; unless the group could come up with a reason why we shouldn’t do that experiment, we’d just put the idea on hold, and I’d add that experiment to the list for that week.

Once, a student voted against a proposal but was unable to explain what would need to be clearer, more precise, more causally linked, etc.  It just “didn’t sound right.”  None of my usual techniques could draw out of that student what they needed in order to be confident that the proposal was well-reasoned, or even just to feel heard.  So I reminded them that the model is always modifiable, that we can remove ideas as well as add them, and that we have committed to base our decisions on the best-quality judgement we can make, not “truth” or “feeling.”  I told them that I would consider the idea accepted for the purposes of using it on quizzes, but record the disagreement, and that if/when new information came up, we would revisit it.

An important point about facilitation: these conversations were sometimes fun and lighthearted, but sometimes gruelling for the students, especially as we moved into more abstract and unintuitive material.  The most important mistake I made was letting the conversation drag.  To remedy this, I used what I consider fairly aggressive facilitation — quickly redirecting rambling speakers, proposing time limits, summarizing and restating sooner than I otherwise might, etc.  If the conversation was so unclear that students weren’t able to even give feedback about how to improve, I had to diagnose that as soon as possible.  I would say something like “It looks like we’re not ready to move forward here.  Joe, let’s meet after class and see if we can figure out how to strengthen your proposal.”

4.  Students maintain their own record of the model

A few things I noticed:
  • Students had a reason to take decent notes, at least about certain key ideas.
  • The list got really, really long, and included a lot of topics, many of them linked to each other.  It was a powerful visual indicator of just how huge an endeavour this really is.
  • Most ideas fit clearly under more than one category.  It was up to the students to choose.  It was a good reminder that the new ideas aren’t divorced from the old ideas, that all the chapters in the textbook really do connect.

5. Every test is “open notes”?

In a way, yes.  I felt a bit weird about this — there are some things they really need memorized.  But it worked out really well.  I allowed students to bring their copy of the model to any quiz (no other notes).  It had to contain only the ideas voted on and accepted by the class — no worked problems, etc.  I circulated during quizzes and randomly chose some paper copies to take a close look at.  It was a non-issue.

About halfway through the semester, students gradually stopped bringing it.  We used that thing so much, for so many purposes, that they mostly didn’t need it.  Besides, like any quiz, if you have to look everything up, you won’t have time to finish.  Then you’ll have to apply for reassessment, which means doing some practice problems, which means building speed and automaticity, which means needing the notes less.

6.  “You’re going to grade us based on what we say?!”

I modified many quiz questions so that they said things like

Explain, using a chain of cause and effect, one possible reason for the difference in energy levels on either side of S1.  It doesn’t have to be “right” – but it must be backed up by the model and not have any gaps.

This worked extremely well.  It helped students enter the tentative space between “I get it” and “I don’t get it,” saying things like “Based on our model, it is possible that…”.  It gave me the opportunity to show a variety of well-reasoned answers (I sometimes used excerpts from student answers to open conversation in a later class).  It helped me banish my arch nemesis: the dreaded “But You Said.”  (Because I didn’t say.  They did.)