
I wrote recently about creating a rubric to help students analyze their mistakes.  Here are some examples of what students wrote — a big improvement over “I get it now” and “It was just a stupid mistake.”

The challenge now will be helping them get in the habit of doing this consistently.  I’m thinking of requiring this on reassessment applications.  The downside would be a lot more applications being returned for a second draft, since most students don’t seem able to do this kind of analysis in a single draft.

Understand What’s Strong

  • “I thought it was a parallel circuit, and my answer would have been right if that was true.”

  • “I got this question wrong but I used the idea from the model that more resistance causes less current and less current causes less power to be dissipated by the light bulbs.”

  • “The process of elimination was a good choice to eliminate circuits that didn’t work.”

  • “A good thing about my answer is that I was thinking if the circuit was in series, the current would be the same throughout the circuit.”


Diagnose What’s Wrong

  • “The line between two components makes this circuit look like a parallel circuit.”

  • “What I don’t know is, why don’t electrons take the shorter way to the most positive side of the circuit?”

  • “I made the mistake that removing parallel branches would increase the remaining branches’ voltage.”

  • “What I didn’t realize was that in circuit 2, C is the only element in the circuit so the voltage across the light bulb will be the battery voltage, just like light bulb A.”

  • “I looked at the current in the circuit as if the resistor would decrease the current from that point on.”

  • “I think I was thinking of the A bulb as being able to move along the wire and then it would be in parallel too.”

  • “What I missed was that this circuit is a series-parallel with the B bulb in parallel with a wire, effectively shorting it out.”

  • “What I did not realize at first about Circuit C was that it was a complete circuit because the base of the light bulb is in fact metal.”

  • “I thought there would need to be a wire from the centre of the bulb to be a complete circuit.”

  • “I wasn’t recognizing that in Branch 2, each electron only goes through one resistor or the other.  In Branch 1, electrons must flow through each resistor.”

  • “I was comparing the resistance of the wire and not realizing the amount of distance electrons flowed doesn’t matter because wire has such low resistance either way.”

  • “My problem was I wasn’t seeing myself as the electrons passing through the circuit from negative to positive.”


Improve

  • “In this circuit, lightbulb B is shorted so now all the voltage is across light bulb A.”

  • “When there is an increase in resistance, and as long as the voltage stays constant, the current flowing through the entire circuit decreases.”

  • “After looking into the answer, I can see that the electrons can make their way from the bottom of the battery to the middle of the bulb, then through the filament, and back to the battery, because of metal conducting electrons.”

  • “To improve my answer, I could explain why they are in parallel, and also why the other circuits are not parallel.”

  • “I can generalize this by saying in series circuits, the current will stay the same, but in parallel circuits, the current may differ.”

  • “From our model, less resistance causes more current to flow.  This is a general idea that will work for all circuits.”

This year I’ve really struggled to get conversation going in class.  I needed some new ways to kick-start the questioning, counter-example-ing, restating, and exploring implications that fuel inquiry-based science.  I suspected students were silent because they were afraid that their peers and/or I would find out what they didn’t know.  I needed a more anonymous way for them to ask questions and offer up ideas.

About that time, I read Mark Guzdial’s post about Peer Instruction in Computer Science.  While exploring the resources he recommends, I found this compelling and very short PI teacher cheat sheet.  I was already curious because Andy Rundquist and Joss Ives were blogging about interesting ways to use PI, even with small groups.  I hadn’t looked into it before because, until this year, I’d never been so unsuccessful in fostering discussion.

The cheat-sheet’s clarity and my desperation to increase in-class participation made me think about it differently.  I realized I could adapt some of the techniques, and it worked — I’ve had a several-hundred-percent increase in students asking questions, proposing ideas, and taking part in scientific discourse among themselves.    Caveat: what I’m doing does not follow the research model proposed by PI’s proponents.  It just steals some of their most-easily adopted ideas.

What is Peer Instruction (PI)?

If you’re not familiar with it, the basic idea is that students get the “lecture” before class (via readings, screencasts, etc), then spend class time voting on questions, discussing in small groups, and voting again as their understanding changes.  Wikipedia has a reasonably clear and concise entry on PI, explaining the relationship between Peer Instruction, the “flipped classroom”, and Just-In-Time Teaching.

Why It’s Not Exactly PI

My home-made voting flashcards

  • I don’t have clickers, and don’t have any desire for them.  If needed, I use home-made voting cards instead.  Andy explains how effective that can be.
  • I prefer to use open-ended problems, sometimes even problems the students can’t solve with their current knowledge, rather than multiple-choice questions.  That’s partly because I don’t have time to craft good-quality MC items, partly because I want to make full use of the freedom I have to follow students’ noses about what questions and potential answers are worth investigating.
  • Update (Feb 19): I almost forgot to mention, my classroom is not flipped.  In other words, I don’t rely on before-class readings, screencasts, etc.

What About It is PI-Like?

  1. I start with a question for students to tackle individually.  Instead of multiple-choice, it could be a circuit to analyze, or I might ask them to propose a possible cause for a phenomenon we’ve observed.
  2. I give a limited amount of time for this (maybe 2-3 minutes), and will cut it even shorter if 80% of students finish before the maximum time.
  3. I monitor the answers students come up with individually.  Sometimes I ask for a vote using the flashcards.  Other times I just circulate and look at their papers.
  4. I don’t discuss the answers at that point.  I give them a consistent prompt: “In a moment, not right now but in a moment, you’re going to discuss in groups of 4.  Come to agreement on whatever you can, and formulate questions about whatever you can’t agree on.  You have X minutes.  Go.”
  5. I circulate and listen to conversations, so I can prepare for the kinds of group discussion, direct instruction, or extension questions that might be helpful.
  6. When we’re 30 seconds from the end, or when the conversation starts to die down, I announce “30 more seconds to agree or come up with questions.”
  7. Then, I ask each group to report back.  Usually I collect all the questions first, so that Group B doesn’t feel silenced if their question is answered by Group A’s consensus.  Occasionally I ask for a flashcard vote at this point; more often, I collect answers from each group verbally.  I write them on the board — roughly fulfilling the function of “showing the graph” of the clicker results.
  8. If the answers are consistent across the group and nothing needs to be clarified, I might move on to an extension question.  If something does need clarification, I might do some direct instruction.  Either way, I encourage students to engage with the whole group at this point.

Then we’re ready to move on — maybe with another round, maybe with an extension question (the cheat-sheet gives some good multi-purpose prompts, like “What question would make Alternate Answer correct?”).  I’m also a fan of “why would a reasonable person give Alternate Answer?”

Why I Like It

It doesn’t require a ton of preparation.  I usually plan the questions I’ll use (sometimes based on their pre-class reading which, in my world, is actually in-class reading…).  But, anytime during class that I feel like throwing a question out to the group, I can do this off the cuff if I need to.

During the group discussion phase (Step 4), questions and ideas start flowing and scientific discourse flourishes.  Right in this moment, they’re dying to know what their neighbour got, and enjoy trying to convince each other.  I don’t think I buy the idea that these techniques help because students learn better from each other — frankly, they’re at least as likely to pseudoteach each other as I am.  I suspect that the benefit comes not so much from what they hear from others but from what they formulate for themselves.   I wish students felt comfortable calling that stuff out in a whole group discussion (with 17 of us in the room, it can be done), but they don’t.  So.  I go with what works.

No one outside the small group has to know who asked which questions.  The complete anonymity of clickers isn’t preserved, but that doesn’t seem to be a problem so far.

Notes For Improvement

There are some prompts on the cheat sheet that I could be using a lot more often — especially replacing “What questions do you have?” or “What did you agree on?” with “What did your group talk about?” or “If your group changed its mind, what did you discuss?”

There’s also a helpful “Things Not To Do (that seemed like a good idea at the time)” page that includes my favourite blooper — continuing to talk about the problem after I’ve posed the question.

If I were to add something to the “What Not To Do” list, it would be “Shifting/pacing while asking the question and immediately afterwards.”  I really need to practice holding still while giving students a task, and then continuing to hold still until they start the task.  My pacing distracts them and slows down how quickly they shift attention to their task; and if I start wandering the room immediately, it creates the impression that they don’t have to start working until I get near enough to see their paper.

Previously, in data analysis sessions since September:

Students were having trouble drawing any conclusions, noticing any patterns, or thinking about cause at all when they broke into groups to analyze data the class had generated.  There was never enough time, it had always been too long since the measurements were taken, and they had too little background knowledge.  They floundered and fussed, getting increasingly annoyed and disoriented, while I tried to make them think by sheer force of will, running around steering them away from insignificant details (like “all the voltages are even numbers”).

Lesson #1: Procedural Fluency

This semester started off the same.  I asked them to characterise the I vs. V response of a lightbulb and of an LED, and to look for similarities and differences.  As I wrote previously, most of them were up to their gills just trying to wrestle their measurement equipment into submission.  They finally completed their measurements, but without any awareness of what was going on, what the graph meant, etc.

At my wits’ end, I had them do it again with two other models of diode.  To me, this felt almost punitive — like handing someone an identical worksheet and telling them to start over.  To try to make it a bit more palatable, I seized on their frustration about how “Mylène’s labs are so LONG” and told them we weren’t going to cover anything new — we were just going to do some speed practice, so I could show them some techniques for increasing speed without sacrificing accuracy.

I helped them strategize about how to set up a table for measurements (they were writing their measurements out in paragraphs… yikes).  I also got much more directive than usual, and informed them that everyone was required to use two meters simultaneously (many were using a single meter to switch back and forth between measuring voltage and current… with the attendant need to unhook the circuit TWICE for every data point!!).  There was big buy-in for this, as they immediately saw that they were going to get an entire data set in a single class.  I saved a few minutes at the end of class for students to share their own time-saving ideas with their classmates.

What I didn’t realize was that they had internalized so little information about diodes that blue LEDs seemed like a whole different project than red LEDs.  I was worried they would mutiny about being forced to redo something they’d already finished, but I was wrong.  They welcomed, with relief, the opportunity to do something that was recognizable, with a format and a set of instructions that they had already worked the kinks out of.  Moral of the story: it’s the background knowledge, stupid.  (I can hear Jason Buell‘s voice in my head all the time now).

Lesson #2: Distributed Practice

I also realized that asking this group to sit down with some data and analyze the patterns in an hour is not going to happen.  I figured it was mostly about having enough time (and not feeling pressured), so I started requiring them to keep track of “what did you notice?  what did you wonder?” while they were measuring.  After they were done measuring, I also required them to write some notes to themselves: explanations of anything in the lab that supported the model, and questions about anything that wasn’t supported by the model or that seemed weird (“When you find something funny, measure the amount of funny.” [Bob Pease of National Semiconductor, probably apocryphal]).

That meant they could take their time, tease out their thoughts, and write down whatever they noticed.  When it was time to sit down for a data analysis session, they had already spent some time thinking about what was significant in their measurements.  They had also documented it.

Lesson #3: Expect them to represent their own data

In the past, I’ve made a full record of the class’s data and given a copy to every student.  My intention was that they would comb through the evidence in a small group — maybe splitting up the topics (“you look at all the red LEDs — do they all turn on at 1.7?  I’ll check the blue ones”) — and everyone would be able to engage with the conversation, no matter whose data we were discussing.  My other intention was that they would take better notes if they knew other students would read them.  It worked last year… but this year I got extremely tidy notes, written out painstakingly slowly so the writing was legible… with measurements buried in paragraphs.

Last week, I asked everyone to get into small groups with people who were not their lab partner.  They were not required to analyze the whole class’s data — only the data of the people in the small group, who would be expected to explain it to the others.

The students loved it because they were analyzing 4 data sets, not 9.  So they were happy.  I was happy too, because, from out of nowhere, the room exploded in a fury of scientific discourse.  “Oh?  I got a different number.  How did you measure it?”  “Does everybody have…?”  “Will it always be…?”  “Why wouldn’t it…?”  “That’s what we’d expect from the model, because…”

I was floored.  Since I didn’t have to run around putting out fires, I found my brain magically tuned in to their conversations — I filled an entire 8.5×11 sheet with the skillful argumentation and evidence-based reasoning I overheard.  Honestly, I didn’t hear a single teleological, unscientific, or stubbornly antagonistic comment.  Most days I can’t do this at all — I’m too overwhelmed to hear anything but a buzzing cacophony, and they’re too tense to keep talking when I get close.  This time, they didn’t even stop talking when I wandered near their desks — they were all getting their foot in the door, making sure their data made the final cut.

It slowed down a bit when I reminded them that they had to have at least one possible physical cause for anything they proposed (e.g. “the materials and design of the diode cause it to not conduct backwards” is not a cause).  But they picked it back up, with awesome ideas like

  • Maybe the diode acts like a capacitor — it stores up a certain amount of energy
  • Maybe the diode only takes whatever energy it needs to light up, and then it doesn’t take any more
  • Maybe the lightbulb’s resistance went up because it’s a very narrow filament, but it has low resistance.  So when all the current rushes in, there’s no room for more electrons, and that restricts current.
  • Maybe a diode has a break inside, and it takes a certain amount of voltage to push the electrons through the gap.  It’s like shooting electrons out of a cannon — they need a certain force to make it over a ravine.
  • How come electrons in a silicon crystal “bond” and make a pair?  I thought they orbit around the nucleus because electrons repel each other.
  • If a leaving electron creates a positive ion, wouldn’t that attract the same electron that left?

These are not canonical, of course.  But they’re causes!  And questions!  And they have electrons!!  I was so excited.  The students were having fun too — I can tell because when they’re having fun, they like to make fun of me (repeating my stock phrases, pretending to draw from a deck of cards to cold call someone in the audience, etc., etc.).

Moral of the story

1. During measurement, you must write down what you noticed, what you wondered/didn’t know.

2. After measurement, you must write down which parts of this the model can explain (students call this “comparing the data to the model.”)  This causes students to actually pull out the model and read it.  Awesome.

3. Anything that can’t be explained by the model?  Articulate a question about it.

4.  If that’s still not working well, and I’m still getting into a battle of wills with students who say that the model doesn’t explain anything about diodes, do the same lab again.  Call it speed practice.

Then, when we share data and propose new ideas to the model, they’ve already spent some time thinking about what’s weird (no reverse current in a diode), what’s predicted surprisingly well by the model (forward current in a diode) and what’s predicted surprisingly badly (current in a lightbulb).  When we sit down to analyze the data, they’re generating those ideas for the second or third time, not the first.

5. Stop making copies of everyone’s data — it allows one strong and/or bossy student to do all the analyzing.  Require that the whiteboards include an example from every person’s data.

6. Watch while they jump in to contribute their own data, compare results and ideas about “why,” facilitate each other’s participation, summarize each other’s contributions, challenge, discuss, and pick apart their data according to the model.

7.  Realize that since I’m less overwhelmed with needing to force them to contribute constructively, I too have much more cognitive capacity left over for listening to the extremely interesting conversations.

Last year, I accidentally fell into an inquiry-driven style of teaching.  This year, I set out to do it on purpose.  Like Brian Frank’s example of students who do worse on motion problems after learning kinematics equations, my performance went down.  Unlike in that example, though, inquiry is a sense-making tool for the teacher, not just the students, so I’m doing more sense-making, not less.  The upshot: my awareness has increased while my performance has decreased.  (The proper spelling is A-N-X-I-E-T-Y).

Things that improved

I added a unit called “Thinking Like a Technician,” where students practice summarizing, clarifying, and identifying physical causes, using everyday examples.  When we got to atomic theory, they were less freaked out by the kind of sense-making I was asking them to do.

I started using a whiteboard template, based on Jason’s writing about Claim-Evidence Reasoning.  Like Jason, I introduced it to students as “Evidence-Claim-Reasoning.”  The increased organization of whiteboards makes things flow more smoothly for whiteboard authors when the discussion happens a few days after the analysis.  The standard layout lowers the cognitive load for students in the audience, since they know what to expect and look for.

The major tactical error I made

Last year I started with magnets and right away focused on students’ ideas about how atoms cause magnetic phenomena.  That meant that our first area of inquiry was atoms.  This year, I thought I was being smart and started by digging into what students wondered about electricity.  BIG MISTAKE.  Students wonder a lot about electricity — mostly about how you can get electrocuted, or how to give someone else a shock.  It was fascinating reading for me, but they have absolutely no tools for making sense of the answers to their questions.  The conventional wisdom about “electrons always seek ground” and “electricity always takes the path of least resistance” doesn’t help.  Since they start with neither foundational knowledge about electrons nor measurement technique with a multimeter, their attempts to either research or measure their way towards coherent ideas were random and pretty fruitless.  As usual, Brian sums it up — I had backed us into a corner where “this makes no sense and right now we have no tools for making sense of it.”

We are finally recovering (about 6 weeks later… *sigh*).  Some useful things got accomplished in the meantime — noticing and measuring the discrepancies between meters, and figuring out some things about batteries along the way (which will help in the next unit).  Note for next time: start with atoms.  Atoms are in concrete things like chairs and sweaters — it avoids the need to start with the jumble of ideas called “electricity” (power/charge/energy/voltage/current/potential/etc.).  Also, give the quiz about meter technique earlier; it helped students strengthen understandings that would have been helpful a month ago.

Last thoughts

When engaging in a new strategy (whether for students or me), make sure it has some form of sense-making built-in.

Also, make sure the rest of life is not chaotic and stressful while doing these experiments.  The existential angst can be a bit much.

How I got my students to read the text before class: have them do their reading during class.

Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.

It’s not transformative educational design, but it’s an improvement, with these advantages:

  1. It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
  2. The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric.  That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
  3. When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class.  That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
  4. Knowing that I will address their questions provides an incentive to actually ask them.  It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
  5. Even in a classroom of 20 people, each person gets an individualized pace.
  6. I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
  7. We’re using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo.  This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns.  Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…

It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.

I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources.  It’s pretty similar to what I would say if I was lecturing.  I write the summaries in an informal tone intended to start a conversation.  Here’s an example:

And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).


Highlights of student feedback:

Noticing connections to earlier learning

When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite

Summarizing

When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol

Noticing confusion or contradiction

What do f2(OL) and Av(OL) stand for?

I’m still not sure what slew-induced distortion is.

I don’t know how to make sense of the f2 = funity/Av(CL).  Is f2 the bandwidth?

In [other instructor]’s course, we built an audio monitor, and we used an op amp.  We used a somewhat low frequency (1 kHz), and we still got a gain of 22.2.  If I use the equation, the bandwidth would be 45 Hz?  Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?
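
For readers who don’t know the formula that student is quoting: it’s the gain-bandwidth relationship for an op amp.  A quick worked example, assuming a hypothetical unity-gain frequency of 1 MHz (the actual value depends on the op amp):

$$f_{2(CL)} = \frac{f_{\text{unity}}}{A_{v(CL)}} = \frac{1\ \text{MHz}}{22.2} \approx 45\ \text{kHz}$$

The closed-loop bandwidth runs from (roughly) DC up to $f_{2(CL)}$; it is not a window centered on the signal frequency, which is exactly the confusion that question surfaces.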

Asking for greater precision

What is the capacitance of the internal capacitor?

Is this a “flipped classroom”?

One point that stuck with me about many “flipped classroom” conversations is designing the process so that students do the low-cognitive-load activities when they’re home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they’re in class, surrounded by supportive peers and an experienced instructor.

This seems like a logical argument.  The trouble is that reading technical material is a high-cognitive-load activity for most of my students.  Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone.  The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.

The conclusion follows: have them read it in class, where I can support them.

I’m teaching embedded systems programming for the first time this year.  The only other programming course I’ve taught used PLCs and a “ladder-logic” style language.  The students and I thought it was going well until most people bombed the mid-term.  I’m trying to improve students’ ability to predict what a program will do, in the hopes that it will help them make better design and debugging choices.

I start by issuing every student a PICDem 2 demo board (mostly for historical reasons — I’m evaluating other embedded systems in a project for next summer, so feel free to throw your suggestions in the comments).  C is not high on my list for students who’ve never programmed before — especially not embedded C.  But there you have it.

On the first day, I started by handing out the boards and getting everyone to push all the buttons.  I then handed out a worksheet that asks students to explore the board.

The worksheet asks students to

  • Make a list of everything the demo board can do (to get them thinking about the feature set)
  • Find some of the important hardware on both the PCB and on the schematic (to build fluency with switching between the two)
  • Keep track of questions that come up while they do that (in a style slightly reminiscent of Cornell notes or Cris Tovani‘s double-entry diary technique for reading comprehension.  Is it fair to call this “reading” the board?)

When everyone had had a good time making shrill buzzer noises, I went around the room gathering every feature we could think of, and gathering questions too.    If you want to see what my students are curious about, take a look at our mind map (names removed) and click on the “Programmable Systems” section.  Highlights:

  • How does the micro convert an analog voltage to 0s and 1s?  Does it add in tiny increments?  There’s gotta be some kind of estimating — it must round.
  • What’s the other number shown on the LCD beside the voltage?  It goes from 0 when the pot’s turned all the way down, to 1023 when it’s turned all the way to 5V.  It’s 511 when the voltage is 2.5V.  So that’s got to be 1 byte of something.  [Bless the heart of the demo app designers who put the raw A/D count on the display — M.]
  • How do you change the frequency of the buzzer?  Is it an LC circuit?
  • Is it using a timer to control the time between the buzzer voltage transitions?
  • What would happen if you changed the duty cycle?
  • Is there a counter?  Where is it?
  • So it doesn’t really know what time it is — it’s just counting oscillator transitions since it was turned on.


I love the way they’re making connections to their course work on digital systems (counters, timers, relationship between frequency and duty cycle, the significance of 1023) and AC circuits (LC oscillators).  They’re asking relevant questions, making significant observations, making supported inferences, and getting excited about figuring out “what makes it do that” (which might be my “mantra” for the program).  These questions will drive my approach to the next few weeks.
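
For what it’s worth, the scaling they were reverse-engineering from the LCD is easy to sketch.  The 0–1023 range is a 10-bit A/D count (2^10 − 1 = 1023).  This is not the demo board’s actual firmware, just a minimal C sketch of the conversion it presumably performs against a 5 V reference:

```c
#include <stdio.h>

/* Hypothetical scaling: a 10-bit A/D count (0-1023) mapped onto a
   5 V reference.  A count of 511 should land near 2.5 V, matching
   what the students observed on the LCD. */
double adc_to_volts(unsigned count)
{
    return count * 5.0 / 1023.0;
}

int main(void)
{
    unsigned samples[] = { 0, 511, 1023 };  /* counts the students reported */
    for (int i = 0; i < 3; i++)
        printf("count %4u -> %.2f V\n", samples[i], adc_to_volts(samples[i]));
    return 0;
}
```

Whether the demo app divides by 1023 or 1024 I can’t say without its source, but the 0 / 511 / 1023 readings fit either way.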

It’s all too easy for me to fall into the pattern of letting a few students do most of the talking.  I’ve gotten better at asking everyone to think for 30 seconds before allowing someone to talk through their answer.  But that still means that a few students always volunteer.

I’m tempted to tell students to “only raise your hand if you have a question,” so that I can call on them to give answers.  As a way of randomizing my choices, I’ve considered everything from the elementary-school go-to idea of popsicle sticks with names on them, to web-based random-name choosers, to carrying around a 20-sided die.  But popsicle sticks seemed a little childish, software means waiting for a computer to boot and a browser to launch, and the die will seem suspiciously un-random unless I can expose the lookup table.


The answer came to me yesterday as I was rummaging through my junk drawer.  Playing cards are just the right tool for the job.  I pulled 17 cards out of an old deck, wrote a student’s name on the face of each one, wrapped an elastic around them, and threw them in my bag.  Possibilities:

  • Use them to call on students equally, by taking the card out of the deck when they’ve answered
  • Use them to call on students randomly, by putting the card back into the deck
  • Get a student to shuffle them at the beginning of class
  • Let a responding student choose a card from the deck if they need help answering a question
  • Use them as something to hold on to/shuffle as a reminder to slow down and not interrupt when students are talking or working
  • Break out a card trick when students are exhausted or zoned out or we desperately need some levity.

I brought them to class yesterday and used them to call on students equally.  It was easy, unobtrusive, and wasted zero class time.  The students seemed to think it was funny.  Incidentally, I took all the face cards out of the deck before labeling them, in case anyone had a reaction to the value of the card their name was on (they didn’t).  Bonus: it took care of my attendance too (missing students went in a separate pile), always helpful at this time of year when I’m still second-guessing myself on some of the names.
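
If you do want the software version after all, the two calling modes are just drawing without replacement versus with replacement.  A minimal sketch in C (hypothetical roster of 17, matching my deck; not anyone’s real tool):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CLASS_SIZE 17

int main(void)
{
    int deck[CLASS_SIZE];   /* card i stands for student i (names omitted) */
    for (int i = 0; i < CLASS_SIZE; i++)
        deck[i] = i + 1;

    srand((unsigned) time(NULL));

    /* A student shuffles the deck at the start of class (Fisher-Yates). */
    for (int i = CLASS_SIZE - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = deck[i];
        deck[i] = deck[j];
        deck[j] = tmp;
    }

    /* "Call on students equally": deal from the top without replacement,
       so everyone is called exactly once per pass through the deck. */
    for (int drawn = 0; drawn < CLASS_SIZE; drawn++)
        printf("Next up: student %d\n", deck[drawn]);

    /* "Call on students randomly" would instead draw WITH replacement:
       pick deck[rand() % CLASS_SIZE] each time. */
    return 0;
}
```

The cards do exactly this, minus the boot time.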

As a way to have students and faculty introduce themselves, I stole the “snowball” exercise from Kate Nowak.  Quick summary: each person writes 3 distinctive things about themselves on a piece of paper, crumples it up, and gently tosses it somewhere in the room.  We then all pick up a new piece of paper and walk around introducing ourselves until we find its owner.

I really wondered whether this would work.  Was it too goofy for a trade school?  Would students find it patronizing?  Would it reinforce the image that trade school is slack or unrigorous?  Nope.  It went over perfectly.  It has just the right combination of professional attitude (you have to walk around shaking hands and introducing yourself politely to strangers) and personality.  I also appreciated that, unlike some other icebreaker games, it doesn’t put people in the awkward situation of having to discuss a subject they’re not comfortable with — everyone gets to choose what to share.

When everyone had found the person they were looking for, they introduced each other.  As that happened, I asked each person to repeat the names of all the people before them.  Faculty went last, naturally.

The students seemed quite comfortable with it.  I’m glad to see that they now know my name as well as their classmates’.  I also got positive feedback from both of the other faculty, who were glad to have learned the students’ names quickly.

On the second day, I had students do the Marshmallow Challenge.  This led to great conversations about prototyping, the value of trying things that we don’t fully understand, why learning looks like failure sometimes.  I worked on bringing out the message that “practice makes better,” that we as a group can generate knowledge that contributes to everyone’s understanding.

So, I decided to do the marshmallow challenge twice — once at the beginning of the morning and once at the end.  To keep it from being boring, I shortened the time to 10 minutes.  I have mixed feelings about this.  In the first attempt, there were 4 standing towers.  In the second attempt, they were much taller, but there were only 2 of them.  It gave me a chance to talk about why, when you are learning, sometimes it looks like you’re not improving.  But it also meant that two groups went home looking a little deflated.  Maybe next time I’ll do the second iteration on the second day.

Note: there’s nothing wrong with measuring the towers in centimeters, then showing the video where Tom Wujec discusses average heights in inches.  It reduces the amount of “is mine higher or lower than average” and motivates the next day’s math class about conversion factors.
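
(For the record, the conversion that next day’s math class practices is a one-liner; for a hypothetical 64 cm tower:

$$64\ \text{cm} \times \frac{1\ \text{in}}{2.54\ \text{cm}} \approx 25.2\ \text{in}$$

so students can place their own towers against Wujec’s averages.)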

Last semester, as I stumbled into inquiry-based teaching, there were times when I wanted the students to learn something specific at a specific time.  For example, how to use an ammeter without blowing the fuse.

Option 1: I make a research presentation for acceptance to the model

It wasn’t perfect, but my solution was to propose something for the model myself.  I would prepare a 3-min presentation, bring 2 sources, and ask the students to evaluate them.  In the context of dozens of student presentations, I made 5 throughout the semester, so it kept my talk ratio fairly low.

Advantages: it gave me the opportunity to show them what I expected in a presentation and in an analysis of the sources.  It also gave me the opportunity to ask them for feedback about specific things, like “making the presentation as short as possible but no shorter” or “keeping the presentation focused.”

Disadvantage: it would be difficult for students to reject my arguments (they never did).  However, they did sometimes propose to rephrase them for clarity or precision.  They used the Rubric for Assessing Reasoning to formulate feedback about my logic.  This is certainly an improvement over what I was doing before.

Option 2: I put it on the skill sheet

I kept my skills-based grading system.  Going with the flow of student questions meant that sometimes we jumped around between units.  Occasionally I removed a skill from the list, if it was clear that the relevant outcome was being met some other way.  However, it was gratifying to realize that the things I put on the skill sheets were mostly things we ended up doing in the course of answering our questions.  In other words, the skills in the beginner course were things that beginners would actually care about while they were in the process of beginning to learn the topic.

If I needed the students to learn something specific that was measurable, I would put it on the “Shop” side of the skill sheet.

Every week during our shop period, I would write on the board the questions that had come up that week.  I would also write any skills that I wanted them to demonstrate by the end of the day.  They were always skills you would need in order to explore the questions.  If the skill wasn’t needed to explore our questions, then I didn’t need to teach it right that minute, did I.

Things that dematerialize my patience in the classroom: magical thinking, question-begging, and conclusions that don’t follow from their premises.  The real reason I was foaming at the mouth wasn’t poor-quality reasoning; it was that I had no way to respond to it.  The language I use to discuss these ideas is so foreign to my students that I didn’t know where to start.

One of the changes I made on purpose this year was defining criteria for good-quality reasoning.  I hoped it would help us develop a shared vocabulary for judging the validity of inferences without the need for me to teach symbolic logic.  I spent last summer reading Academically Adrift (my review here) about students’ lack of improvement in critical thinking during their post-secondary schooling, and a research paper about approaches to teaching critical thinking (seriously, go read this thing, especially the profiles of actual teachers.  It’s fascinating).  The former focused on how seldom students are asked to make or break an argument.  The latter contained interviews with faculty who conflated good-quality reasoning with self-direction, reflectiveness, or constructed beliefs.  In both cases, I recognized myself entirely too much.  I was galvanized.

I decided to start with these criteria for good-quality reasoning:

  1. clarity
  2. precision
  3. internal consistency
  4. connection to our model
  5. connection to our experience/intuition
  6. seamless chain of cause and effect
  7. consideration of other equally valid perspectives
  8. credible sources

I chose them from a variety of options.  The Foundation for Critical Thinking suggests

  • Clarity
  • Precision
  • Accuracy
  • Significance
  • Relevance
  • Logic
  • Fairness
  • Breadth
  • Depth

I toyed with these all summer, checking them against examples I encountered that I thought were particularly well or poorly reasoned.  They held up pretty well, but I wanted something more student-friendly, and there were things I wanted to add.  I knew that meant I had to remove some, since this would be a lot for students to digest.

First I removed “fairness,” as less important than the others in what is essentially an intro physics course.  We’re not talking about ethical dilemmas here (well, OK, we are, but we’re not assessing them).  Breadth and depth are important but, again, it’s an intro course: it’s intended to be neither of those.  Accuracy was the only standard my students would recognize, but my point was to help them judge the quality of their thinking when they didn’t know the answer. Significance and relevance were almost indistinguishable.  I put them in the second tier — things we could get to later.

Clarity, Consistency, Causality

“Logic” was a problem.  I knew from previous experience that my students defined “logical” as a nebulous cross between “familiar” and “reinforcing my preconceptions.”  I had to stay away from that word.  So I broke it down into exactly what I mean by it: internal consistency, coherence with other accepted ideas, and distinguishing cause from correlation.

I kept clarity and precision separate, because I wanted to use them differently.  Clarity is for asking “what do you mean exactly?”  For example, we might start with “The voltage goes through.” Asking “What do you mean by ‘voltage’ exactly?” and “What do you mean by ‘through’ exactly?” will get us to “the electrons on one side of the resistor have more energy than the electrons on the other side.” Precision is for asking “in what direction” or “how much?”  It gets us to “on which side do the electrons have more energy?” or “how much of a difference is there?”  (It also motivated some fantastic conversations about the meaning of “significant figures” — post for another day).  It was a way to help students remember to ask these questions of themselves.

That yielded 1-4 and 6.

Plausibility Is Not Enough

Then I read a post at Educating Grace about how “making sense of things” sometimes causes us to make myths instead of understanding.  Grace asks the kinds of generative questions that make me feel like I’m growing new synapses even if I can’t answer them, and this was one of them.  There’s an excellent example on Learning Museum about fooling ourselves with plausible answers.  Number 7 was an attempt to at least remember that, once we have constructed a well-reasoned train of thought, we have to take a moment to check whether there are others.

Authority Does Not Equal Credibility

Finally, #8 allowed me to open a conversation about which sources we should use, what it means to “believe” a teacher when they tell you something, what a “fact” is exactly, and lots of other good stuff like the difference between an opinion and a judgement.  I knew we’d have to talk about plagiarism and I wanted to motivate the conversation with our judgement of reliability, not an argument about where the commas go in APA style.

One of my favourite conversations started when the students got a bit combative about my stance toward Wikipedia.  “You tell us to use Wikipedia, and last year our teachers banned us from using Wikipedia,” they said, as if “teachers” were a single hive-mind that is hypocritical, and not a collection of individuals and institutions that sometimes disagree.  I got to practice another new gambit: “Why would a reasonable teacher do that?” I asked.  This defused the combativeness and resulted in students considering a variety of points of view.  “Wikipedia could be wrong” came up, of course.  I would have to wait a few weeks before we got to “the textbook could be wrong.  Or oversimplified.  Or begging the question.  Or poorly reasoned.”

I put them on notice that I required at least two sources for absolutely everything, including Wikipedia and including the textbook and including things their current or former teachers said.  My point is not that people should trust Wikipedia more.  It’s that they should trust everything else less.  Or that we need to redefine what “trust” means, exactly, in the context of “knowing.”
