
I’ve been frustrated lately by my lack of focus and difficulty getting things done.  After accidentally venting on my public blog (rather than the private one I intended to use … *sigh*) I realized there were a few factors at play that could shed some light on my students’ experiences.

1 — High stakes can reduce performance.

The beginning of the year feels high-stakes to me because it’s the time when students are forming their first impressions, the time when expectations get set and rapport gets built.   I’m not saying that those things can’t change over the course of the year.  But I think it’s a lot easier to set an initial expectation than to correct it later, especially about my wacky grading system, my insistence that students “not believe their teachers,” and so on.
There are a bunch of fixes for this.  One is to trust that my intro and orientation activities (videos, Marshmallow Challenge, name game, Teacher Skill Sheet, etc.) set good groundwork for productive classroom culture.  These activities are well-defined — I can print out last year’s agenda and have a decent first week, which should lower the stakes on my successive lesson plans.  Another is to document more carefully what I’ve done, so that next year, when I’m going batty with all the beginning of the year logistics, I don’t add lesson planning to the cognitive load.

How this applies to my students: There are lots of situations that they see as high-stakes and in which they underperform (or just procrastinate their way out of).  Tests, scholarship applications, job applications.  Tests are now pretty low-stakes, but it would be great to do the same for job applications, interviews, etc. — maybe by staging a series of “early drafts.”

2 — Success can cause fear of failure

I’m really proud of what my inquiry class accomplished last year.  The same ideas about evaluating claims and making well-supported inferences run through not just the content but the process.  The classroom culture was better than I could have expected.  I want to do the same thing this year.  The only problem is that it caught me so off guard last year that very little is documented (certainly no daily lesson plans or learning activities for the first couple of months — just jot notes of my impressions or student comments).  It’s immobilizing to imagine doing it again without instructions — what if they fail to buy in to the entire inquiry approach?

It feels like there’s a narrow range of introductions that make everything work out, and if I miss it, I’ll have to go back to lecturing.  Hey, stop that laughing!  I know, I rail against my students’ unwillingness to do things without instructions.  In my defense, there is a small difference: they can reassess their lab skill over and over within a few days.  Whatever I do with my class, it affects their trust in me in ways that cannot be fully undone, and I don’t get to reassess that particular moment until next year.

Fix: document my learning activities thoroughly this year.  Next year I might modify them or toss them out, but at least they’ll be there for those days when I just need to repeat something.

How this relates to my students: I’m not sure what to do here besides what I’m already doing: each assessment attempt is low-stakes, and there’s a wide range of possible good answers for almost everything.  The feeling of having fluked into something can really mess with your head (even if, in my case, I think luck was a small element, dwarfed by hard work and obsessive preparation).

3 — No net.

It feels like there’s no net because the peer-reviewed research community” setup I’m using depends heavily on the good-will of the students.  If any significant chunk decided to zone out, the system would not work.  If there aren’t a critical mass of students writing up papers and giving feedback, then there simply is no course.  If I had a group where absolutely no one was willing to make a good-faith effort, then I suppose I could lecture and assign problem sets (yes, I kept them from my first year).  The reality is that that’s unlikely to happen.  My students tend to be highly-motivated and with a wide age range (the oldest easily double the age of the youngest).  They appreciate being trusted to think.

Fix: no fix needed.  Especially in a group of 17 (as I have this year).

I wonder what kinds of things in my students’ lives feel like there is no net.

In the same vein as the last post, here’s a breakdown of how we used published sources to build our model of how electricity works.

  1. I record questions that come up during class.  I track them on a mind-map.
  2. I pull out the list of questions and find the ones that are not measurable using our lab equipment but that relate to the unit we’re working on.
  3. I post the list at the front of the room and let students write their names next to something that interests them.  If I’m feeling stressed out about making sure they’re ready for their impending next courses/entry into the work world, I restrict the pool of questions to the ones I think are most significant.  If I’m not feeling stressed out, or the pool of questions aligns closely with our course outcomes, I let them pick whatever they want.
  4. The students prepare a first draft of a report answering the question.  They use a standard template (embedded below).  They must use at least two sources, and at least one source must be a professional-quality reference book or textbook.
  5. I collect the reports, write feedback about their clarity, consistency and causality, then hand back my comments so they can prepare a second draft.
  6. Students turn in a second draft.  If they have blatantly not addressed my concerns, back it goes for another draft.  They learn quickly not to do this.  I make a packet containing all the second drafts and photocopy the whole thing for each student. (I am so ready for 1:1 computers, it’s not funny.)
  7. I hand out the packets and the Rubric for Assessing Reasoning that we’ve been using/developing.  During that class, each student must write feedback to every other student. (Note to self — this worked with 12 students.  Will it work with 18?)
  8. I collect the feedback.  I assess it for clarity, consistency, and usefulness — does it give specific information about what the reviewee is doing well/should improve.  If the feedback meets my criteria, I update my gradebook — giving well-reasoned feedback is one of the skills on the skill sheet.
  9. If the feedback needs work, it goes back to the reviewer, who must write a second draft.  If the feedback meets the criteria (which it mostly did), then the original goes back to the reviewer, and a photocopy goes forward to the reviewee.  (Did I mention I’m ready for 1:1 computers?)
  10. Everyone now works on a new draft of their presentation, taking into account the feedback they got from their classmates.
  11. I collect the new drafts.  If I’m not confident that the class will be able to have a decent conversation about them, I might write feedback and ask for another draft. (Honest, this does not go on forever.  The maximum was 4, and that only happened once.) I make yet another packet of photocopies.
  12. Next class, we will push the desks into a “boardroom” shape, and some brave soul will volunteer to go first.  Everyone takes out two documents: the speaker’s latest draft, and the feedback they wrote to that speaker.

The speaker summarizes how they responded to people’s feedback, and tells us what they believe we can add to the model.  We evaluate each claim for clarity, consistency, causality.  We check the feedback we wrote to make sure the new draft addressed our questions.  We try to make it more precise by asking “where,” “when,” “how much,” etc.  We try to pull out as many connections to the model as we can.  The better we do this, the more ammo the class will have for answering questions on the next quiz.

Lots of questions come up that we can’t answer based on the model and the presenter’s sources.  Sometimes another student will pipe up with “I think I can answer that one with my presentation.”  Other times the question remains unanswered, waiting for the next round (or becoming a level-5 question).  As long as something gets added to the model, the presenter is marked complete for the skill called “Contribute an idea about [unit] to the model.”

We do this 4-5 times during the semester (once for each unit).

Example of a student’s first draft

I was pretty haphazard in keeping electronic records last semester.  I’ve got examples of each stage of the game, but they’re from different units — sorry for the lack of narrative flow.

This is not the strongest first draft I’ve seen; it illustrates a lot of common difficulties (on which, more below).  I do want to point out that I’m not concerned with the spelling.  I’ve talked with the technical writing instructor about possible collaborations; in the future, students might do something like submit their paper to both instructors, for different kinds of feedback.  I’m also not concerned with the informal tone.  In fact, I encourage it.  Getting the students to the point where they believe that “someone like them” can contribute to a scientific conversation, must contribute to that conversation, or indeed that science is a conversation, is a lot of ground to cover.  There is a place for formal lab reports and the conventions of intellectual discourse, but at this point in the game we hadn’t developed a need for them.

Feedback I would write to this student

Source #1: Thanks for including the description of what the letters mean.  It improves the clarity of the formula.

Source #2: It looks like you’ve used the same source both times.  Make sure to include a second source — see me if you could use some help finding a good one.

Clarity: In source #1, the author mentions “lowercase italic letters v and i…” but I don’t see any lower case v in the formula.  Also, source #1 refers to If, but I don’t see that in the formula either. Can you clarify?

Cause: Please find at least one statement of cause and effect that you can make about this formula.  It can be something the source said or something you inferred using the model.  What is causing the effect that the formula describes?

Questions that need to be answered: That’s an interesting question.  Are you referring to the primary and secondary side of a transformer?  If so, does the source give you any information about this? If you can’t find it, bring the source with you and let’s meet to discuss.

Common trouble spots

It was typical for students to have trouble writing causal statements.  I’m looking for any cause-and-effect pair that connects to the topic at hand.  I think the breadth of the question is what makes it hard for students to answer.  They don’t necessarily have to tell me “what causes the voltage of a DC inductor to be described by this formula” (which would be way out of our league).  I’d be happy with “the inductor’s voltage is caused by the current changing suddenly when the circuit is turned on,” or something to that effect.  I’m not sure what to do about this, except to demonstrate that kind of thinking explicitly, and continue giving feedback.
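To make that concrete: assuming the formula in the student’s source is the standard inductor voltage relation (I can’t confirm that from the draft alone), the kind of cause-and-effect reading I’m fishing for might look like this:

```latex
% Hedged sketch, assuming the formula under discussion is the usual
% inductor relation:
v_L = L \, \frac{di}{dt}
% Causal reading: a change in the current through the inductor (di/dt)
% causes a voltage v_L across it; the faster the change, the larger
% the voltage, scaled by the inductance L.
```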

It was also common for students to have trouble connecting ideas to the model.  If the question was about something new, they would often say “nothing in the model yet about inductors…” when they could have included any number of connections to ideas about voltage, current, resistance, atoms, etc.  I go back and forth about this.

In the example above, I could write feedback telling the student I found 5 connections to the model in my first three minutes of looking, and I expect them to find at least that many.  I could explicitly ask them to find something in the model that seemed to contradict the new idea (I actually had a separate section for contradictions in my first draft of the template).  That helped, but students too often wrote “no contradictions” without really looking.  Sometimes I just wait for the class discussion, and ask the class to come up with more connections, or ask specific questions about how this connects to X or Y.  This usually works well, because that’s the point at which they’re highly motivated to prevent poorly reasoned ideas from getting into the model.  Still thinking about this.

Example Student Feedback

(click through to see full size)

I don’t have a copy of the original paper on “Does the thickness of wire affect resistance,” but here is some feedback a classmate wrote back.

Again, you can see that this student answered “What is the chain of cause and effect” with “No.”  Part of the problem is that this early draft of the feedback rubric asks, in the same box, if there are gaps in the chain.  In the latest draft, I have combined some of the boxes and simplified the questions.

What’s strong about this feedback: this student is noticing the relationship between cross-sectional area of a wire (gauge), and cross-sectional area of a resistor.  I think this is a strong inference, well-supported by the model.  The student has also taken care to note their own experience with different “sizes” of resistor (in other words, resistors of the same value that are cross-sectionally larger/smaller).  Finally, they propose to test that inference.  The proposed test will contradict the inference, which will lead to some great questions about power dissipation.  Here the model is working well: supporting our thinking about connections, and leading us to fruitful tests and questions.

Example of my first draft

Sometimes I wrote papers myself.  This happened if we needed 12 questions answered on a topic, but there were only 11 students.  It also happened when we did a round of class discussions only to realize that everyone’s paper depended on some foundational question being answered, but no one had chosen that question.  Finally, I sometimes used it if I needed the students to learn a particular thing at a particular time (usually because they needed the info to make sense of a measurement technique or new equipment). This gave me a chance to model strong writing, and how to draw conclusions based on the accepted model.  It was good practice for me to draw only the conclusions that could be supported by my sources — not the conclusions that I “knew” to be true.

I tried to keep the tone conversational — similar to how I would talk if I was lecturing — and to expose my sense-making strategies, including the thoughts and questions I had as I read.

In class, I would distribute my paper and the rubrics.  Students would spend the class reading and writing me some feedback.  I would circulate, answering questions or helping with reading comprehension.  I would collect the feedback and use it to prepare a second draft, exactly as they did.  If nothing else, it really sold the value of good technical writing.  The students often commented on writing techniques I had used, such as cutting out sections of a quote with ellipses or using square brackets to clarify a quote.

Reading student feedback on my presentations was really interesting.  I would collect their rubrics and use them to prepare a second draft.  The next day, I would discuss with them my answers and clarifications, and they would vote on whether to accept my ideas to the model.  At the beginning of the year they accepted them pretty uncritically, but by the end of the year I was getting really useful feedback and suggestions about how to make my model additions clearer or more precise.

I wish I had some student feedback to show you, but unfortunately I didn’t keep copies for myself.  Definitely something I will do this year.

How It’s Going

I’m pretty satisfied with this.  It might seem like writing all that feedback would be impossible, but it actually goes pretty quickly.

Plan for improvement: Insist on electronic copies.  Last year I gave the students the choice of emailing their file to me or making hard copies for everyone and bringing them to class.  Because bringing hard copies bought them an extra 12 hours to work on it, many did that.  But being able to copy and paste my comments would help.  Just being able to type my comments is a huge time-saver (especially considering the state of my handwriting).

The students benefit tremendously from the writing practice, the thinking practice and, nothing to sneeze at, the “using a word-processor correctly” practice.  They also benefit from the practice at “giving critical feedback in a respectful way,” including to the teacher (!), and “telling someone what is strong about their work, not just what is weak.” If their writing is pretentious, precious, or unnecessarily long, their classmates will have their heads.  And, reading other students’ writing makes them much more aware of their own writing habits and choices.

I’m not grading the presentation, so I don’t have to waste time deliberating about the grade, or whether it’s “good enough.”  I just read it and respond, in a fairly conversational way.  It’s a window into my students’ thinking that puts zero pressure on me, and very little pressure on the students — it’s intellectually stimulating, I don’t have to get to every single student between 9:25 and 10:20, and I can do it over an iced coffee on a patio somewhere.  I won’t lie — it’s a lot of work.  But not as much work as grading long problem sets (like I did in my first year), way more interesting, and with much higher dividends.

Resources

MS Word template students used for their papers

Rubric students used for writing feedback.  Practically identical but formatted for hand-written comments

I promised, months ago, to write about

an example of a measurement cycle, including how I chose the questions, why they arose in the first place, and how students investigated them

I’ve tried all summer to write this blog post and failed, mostly because I’m discovering the weaknesses in my record-keeping.  I’m going to answer as much of the question as I can, then make a few resolutions for improving my documentation.

Last year in DC Circuits (then in AC Circuits, and in Semiconductors 1 and 2), our days revolved around building and refining a shared model of how electricity works.  There were two main ways we built the model:

  • measuring things, then evaluating our measurements (aka “The Measurement Cycle”)
  • researching things, then evaluating the research (aka “The Research Cycle”)

The measurement cycle

  1. In the process of evaluating some research or measurements, new questions come up.  I track them on a mind-map (shown above).
  2. When I’m prepping our shop day, I pull out the list of questions and find the ones that are measurable using our lab equipment.
  3. I choose 4-6 questions.  I’m ideally looking for questions that have obvious connections to what we’ve done before, that generate new questions, and that are significant in the electronics-technician world (mostly trouble-shooting and electrical sense-making).
  4. Things I think about:  What are some typical ways of testing these questions?  Do the students know how to use the equipment they will need?  Is it important to have a single experimental design, or can I let each lab group design their own?  Is there a lab in the lab-book with a good test circuit?  Is there a skill on the skill sheet that will get completed in the course of this measurement? The answers to these questions will become my lesson plan.
  5. At the beginning of the shop period, I post the questions I expect them to answer and skills I expect them to demonstrate.  We have a brief discussion about experimental design.  Sometimes I propose a design, then take suggestions from the class about how to clarify it or improve it.  Sometimes I ask the lab groups to tell me how they plan to test the question.  Sometimes, I just ask for a “thumbs up/down/sideways” on their confidence that they can come up with a design and, if they’re confident, I turn them loose.
  6. If they will need a new tool to test the questions, we develop and document a “Hazard ID and Best Practice” for that tool.  (More on this soon…)
  7. The students collect data — one data point for each question.  When they finish (and/or, if they have questions), they put their names up on the white board.
  8. When a group finishes, they have to walk me through their data.  I check their lab record against our “best practices for shop notebooks” (an evolving collection of standards generated by the class), and point out where they need to clarify/make changes.  If their measurement process has demonstrated a skill that’s on the skill sheet, I sign it off.  Then I take pictures of their lab notes, and they are done for the day.  I run the pics through a document scanning app and generate a single PDF.
  9. On our next class day, everyone gets a copy of the PDF.  I break them into 4-6 groups, one for every question they tested.  No lab partners together in a group.  Each group analyzes everyone’s data for a single question, makes a claim, and presents it to the class.  The class helps the presenters reconcile any contradictions, then they vote on whether to accept the idea to the model.  This process generates lots of new questions, some of which can’t be answered.  They go on the list for next week.
  10. Repeat for 15 weeks.

Example from September

Students were evaluating their measurements to figure out “What happens to resistance when you hook multiple wires together?”  Here’s the whiteboard they presented to the class.  Lots of good stuff going on here: they’re taking note of the effect meter settings have on measurement, noticing that wires have resistance (even though they’re called “conductors,” not “resistors”), and they’re able to realize that the meter measures the resistance of the leads, as well as what’s connected to the leads.  In case you can’t read their claim, it says “Longer or more leads we connect and measure the resistance, more resistance we get.”

Questions students were curious about

Here’s where this inquiry-style stuff pays me dividends: I’m anticipating the path of future questions, and I’m thinking maybe it will be “what happens when you hook things up in parallel or in other arrangements.” I am so wrong.  The next question is, “is it exactly proportional?”  Whoa.  I love that they’re attending to the fact that things aren’t always proportional.

The next question surprises me even more.  It’s “If this works for test leads, does it work for light bulbs/hookup wires/resistors too?”

I was kind of stunned by that.  At this point, the model includes the idea that resistance varies with length, cross-sectional area, and material.  This should lead us to expect different amounts of resistance from different materials, but not entirely different patterns of variance.  Especially between test leads and hookup wires!
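In symbols, the statements in the model amount to the textbook relation below (noted here for readers; the class had not adopted this form):

```latex
% Standard relation for the resistance of a uniform conductor:
R = \rho \, \frac{L}{A}
% rho: resistivity (material), L: length, A: cross-sectional area.
% Doubling the length of otherwise-identical leads should roughly
% double the resistance, which is exactly what "is it exactly
% proportional?" is probing.
```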

On one hand, I’m afraid this means they think that light bulbs and hookup wire somehow obey fundamentally different physical laws than test leads.  Their willingness to imagine the universe as disconnected and patternless offends against my sense of symmetry, I guess.  I get over myself and realize that it’s awesome that they want their own sense of symmetry to be based on their own observations.  So, I add it to the list of questions for the following week.  I mentally curse the lab books of the world, which would have hustled the students past this moment without giving them a chance to notice their own uncertainty, which would then end up buried in their heads, a loose brick in the foundation of their knowledge, practically impossible to excavate.

How they investigated

The following week, we investigate “Is the change in resistance exactly proportional” and “do other materials do the same thing.”  In our beginning-of-class strategy session, I tease out what exactly they want to know.  Are they asking if the change in resistance is exactly proportional to … length?  number of leads?  What?  They want to know about length, so that’s settled.  There are lots of other questions on the docket that day, including

  • Is there resistance in a terminal block?
  • Can electrons get left behind in something and stay there? [I think this is a much more interesting way to ask it than the textbook-standard “Is current the same everywhere in a series circuit”]
  • If electrons can get stuck, would it be a noticeable amount?  Is that why a light dims when you turn it off?  Are they getting lost or are they slowed down?
  • Can more electrons pass through a terminal block than a wire?
  • If you connect light bulbs end-to-end, we expect the total resistance to go up, but what will happen to the current?  Is it the same as with one bulb?  Will there be less electrons in bulb 3 than in bulb 1?  Will the bulbs be dimmer as you go along?

They’re confident using the tools and materials, so I let them design their experiments however they want.

Some very cool experiments resulted.  To check if total resistance in series was additive, one group used light bulbs, one used resistors, and (my favourite) one group removed two entire spools of hookup wire from the storage cabinet and measured the resistance of the spools separately, then connected them in series (as shown).

This generated some odd data and some experimental design difficulties: there was no easy way to figure out the length of the spool.  They could still tell that their data was odd, though, because the spools appeared visually to be about the same length, so whatever that length was, they should have roughly twice as much of it (short pause to appreciate that the students, not the teacher or textbook, made this judgement call about abstraction).  Or, at least, the resistance should be more than that of one spool.
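Written out, their expectation was something like this (the numbers are invented, since I don’t have their actual readings in front of me):

```latex
% Hedged sketch of the expectation for two similar spools in series:
R_{series} = R_{spool\,1} + R_{spool\,2} \approx 2\,R_{spool}
% e.g. if one spool measures about 4 ohms, two in series should
% measure about 8 ohms; at the very least, more than 4.
```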

And that’s not what happened.  If you look closely at the diagram, each spool appears to have 3 ends…  Note that the sentence at the bottom shows that they distrust their meter.  However, they did not fudge the data, despite not believing it was right.  I believe that this is my reward for not grading lab books.  Wait — not grading lab books is my reward for not grading lab books!

In the following class, this experiment generated no additions to the model but a mother lode of perplexity.  It also resulted in demands for a standardized way of recording diagrams [Oh OK, since you insist…], and questions about what happens when you hook up components “side-by-side” instead of “one after the other.”  And we’re off to the races again.

Speaking of standards for record-keeping…

It was really difficult to find the info for this blog post, because my record-keeping system last year was not designed to answer the question “how did questions arise.”  It was intended to answer the question, “Oh God, what the heck am I doing?”

Some changes that will help:

  • Using Jason Buell’s framework to keep whiteboards organized in Claim/Evidence/Reasoning style
  • In the PDF record of students’ measurements, including a shot of the front-of-class whiteboard where I recorded the agenda
  • Giving meaningful names to those PDF files.  “20110928 lab data” is not cutting it.
  • During class discussion, recording new questions next to the idea we were discussing when the question came up.  Similarly, on the mind-map, attaching new questions to the ideas/discussions that generated them.
  • Keeping electronic records of the analysis whiteboards (step 9 above), not just the raw data.  Maybe distribute these to students as well, to have a record for their notes when we inevitably have to revisit old ideas and re-evaluate them in light of new evidence.

Here’s a snapshot of our model as it existed around March — I have removed all student names for privacy, but I would normally track who proposed or asked what, and keep notes about the context in which questions arose.

We gathered evidence from our own measurements and from published sources.  The students proposed most of the additions to the model, but occasionally I proposed one. I’ll write in detail about each method next — this post covers some basic ideas common to all approaches. Additions to our shared mental model of electricity tended to be short, simple conceptual statements, like “An electron cannot gain or lose charge” or “Between the nucleus and the electrons, there is no ‘stuff.'”

Ground Rules

Here are the ground rules I settled on (though introduced gradually, and not phrased like this to the students).  I was flying by the seat of my pants here, so feel free to weigh in.

  1. Every student must contribute ideas to the model.
  2. For an idea to get added to the model, it must get consensus from the class.
  3. Deciding to accept, reject, or change a proposal must be justified according to its clarity, precision, causality, and coherence (not “whether you like the idea/presenter”).
  4. Each student is responsible for maintaining their own record of the model.
  5. Students may bring their copy of the model to any quiz.
  6. Quizzes will assess whether students can support their answers using the model.

1.  What if the class rejects someone’s proposal?

Adding an idea to the model is a skill on the skill sheet.  As with any other skill, students get lots of feedback, then they improve.  They have until the end of the semester.  I don’t allow them to move to the “class discussion” stage until I’m satisfied that they have a well-reasoned attempt that addresses previous feedback with a solid chance of getting something accepted. Unlike other skills, though, they’re getting individual feedback sheets from each of their classmates, not just me. Most people wrote two drafts.  No one wrote more than three.  More on this soon.

2. Consensus from the class — are you kidding?

I had a small class this year (12 students at the high-water mark) and I’m not sure how it will work with 20-24 next year — I’m thinking hard about how to scale this.  But yes, it really worked.  We used red-green-yellow flashcards to vote quickly.  I did not vote.  My role was to

  • Teach them how to have a consensus-focussed conversation
  • Point out supporting or conflicting ideas in the model, if the students didn’t — in other words, to “keep the debate alive [if] I’m not convinced they have compelling arguments on either side”
  • Do some group-dynamics facilitation, such as recording new questions that come up (for later exploration) as a result of the discussion, making sure everyone gets a chance to speak and is not interrupted, and sometimes doing some graphical recording on the whiteboard if the presenting students are too busy answering questions to do their own drawing.
  • Determine when the group is ready to move to a vote or needs to table the conversation

That made me a cross between the secretary and the speaker of the house.  Besides being productive, it was fun.  A note about group dynamics facilitation: I’m often frustrated by the edu-fad of re-labelling teachers “facilitators.”  I’ve done a lot of group dynamics facilitation for community groups.  It is a different role than teaching, and it’s disingenuous to pretend that students need only facilitation, not teaching.  However, in this situation, facilitation was called for.  The group had a goal (to accept or reject an idea for the model) and they had a process (evaluate clarity, precision, etc. etc.); my job was to attend to the process so they could attend to the goal.

3. What about people blocking consensus out of personal dislike, and other sabotage maneuvers?

There is a significant motivation for students to take part in good faith.  First off, no one wants these conversations to go on forever: they’re mentally challenging and no one has infinite stamina.  Second, if a well-supported, generative idea is left out of the model, no one will be able to use it on the next quiz.  Third, if the class accepts a poorly supported idea, it will cause a huge headache down the road when ideas start to contradict; we’ll have to backtrack, search out the flaw in our reasoning, and then uproot all the later ideas that the class accepted based on the poorly reasoned idea.  They were darned careful what they allowed into their model.

Other than that, I used my power as the facilitator.  Conflict happens in any group, and the usual techniques apply.  It was almost always possible to resolve the conflict fairly quickly: modifying the wording, adding a caveat.  We’d vote again, the idea would get accepted, and we’d move on. Sometimes a student felt strongly enough about their objection to propose an experiment that we should try; unless the group could come up with a reason why we shouldn’t do that experiment, we’d just put the idea on hold, and I’d add that experiment to the list for that week.

Once, a student voted against a proposal but was unable to explain what they needed made clearer, more precise, more causally linked, etc.  It just “didn’t sound right.”  None of my usual techniques worked to draw out of that student what they needed to be confident that the proposal was well-reasoned, or just to feel heard.  So I reminded them that the model is always modifiable, that we can remove ideas as well as add them, and that we have committed to base our decisions on the best-quality judgement we can make, not “truth” or “feeling.”  I told them that I would consider the idea accepted for the purposes of using it on quizzes, but record the disagreement, and that if/when new information came up, we would revisit it.

An important point about facilitation: these conversations were sometimes fun and lighthearted, but sometimes gruelling for the students, especially as we moved into more abstract and unintuitive material.  The most important mistake I made was letting the conversation drag.  To remedy this, I used what I consider fairly aggressive facilitation — quickly redirecting rambling speakers, proposing time limits, summarizing and restating sooner than I otherwise might, etc.  If the conversation was so unclear that students weren’t able to even give feedback about how to improve, I had to diagnose that as soon as possible.  I would say something like “It looks like we’re not ready to move forward here.  Joe, let’s meet after class and see if we can figure out how to strengthen your proposal.”

4.  Students maintain their own record of the model

Benefits:

  • Students had a reason to take decent notes, at least about certain key ideas.
  • The list got really, really long, and included a lot of topics, many of them linked to each other.  It was a powerful visual indicator of just how huge of an endeavour this really is.
  • Most ideas fit clearly under more than one category.  It was up to the students to choose.  It was a good reminder that the new ideas aren’t divorced from the old ideas, that all the chapters in the textbook really do connect.

5. Every test is “open notes”?

In a way, yes.  I felt a bit weird about this — there are some things they really need memorized.  But it worked out really well.  I allow students to bring their copy of the model to any quiz (no other notes).  It must contain only the ideas voted on and accepted by the class — no worked problems, etc.  I circulate during the quiz and randomly choose some paper copies to take a close look at.   It was a non-issue.

About halfway through the semester, students gradually stopped bringing it.  We used that thing so much, for so many purposes, that they mostly didn’t need it.  Besides, like any quiz, if you have to look everything up, you won’t have time to finish.  Then you’ll have to apply for reassessment, which means doing some practice problems, which means building speed and automaticity, which means needing the notes less.

6.  “You’re going to grade us based on what we say?!”

I modified many quiz questions so that they said things like

Explain, using a chain of cause and effect, one possible reason for the difference in energy levels on either side of S1.  It doesn’t have to be “right” – but it must be backed up by the model and not have any gaps.

This worked extremely well.  It helped students enter the tentative space between “I get it” and “I don’t get it,” saying things like “Based on our model, it is possible that…”.  It gave me the opportunity to show a variety of well-reasoned answers (I sometimes used excerpts from student answers to open conversation in a later class).  It helped me banish my arch nemesis: the dreaded “But You Said.”  (Because I didn’t say.  They did.)

It’s been 4 whole months since I wrote about my classroom, and I had fallen far behind long before that.  The short story is this: in September, I fell sideways into inquiry-based teaching. Since then,

  • I got a lot more honest with myself about how well my teaching actually works (ex: I do more pseudoteaching than I thought I did)
  • I fostered a classroom culture that was way more honest than in the past (by attributing authorship, letting student questions direct our activities, sharing results of regular class feedback, direct-teaching them how to respectfully disagree with the teacher, etc.  The increased honesty is where the hard lessons came from)
  • I learned that teaching 5 preps in five months, using an educational approach that I hadn’t anticipated, makes me so sleep-deprived that I am incapable of synthesizing my thoughts into readable blog posts
  • I changed my mind about a bunch of things (ex: I used to think that any student who attends class, works hard, and uses the resources available to them will complete the program.  Hold the tomatoes.)
  • I noticed a bunch of things that I hadn’t realized I didn’t know (ex: I’m not sure exactly what I want my students to capture in their class notes; there are some shop activities where I’m not completely sure what question I intend for them to answer).

It was uncomfortable and sometimes I couldn’t tell if I was “doing it right.”  In other words, I practiced what I preached.  I spent a lot of sleepless Sunday nights, worrying that I wasn’t good enough to pull this off and that I’d mess up my students’ minds, or at least their careers. (I eventually figured out how to judge that my skills, though imperfect, are up to the task.  That’s a post for another day.)

Last week, my second-year students came back from work-terms with glowing reviews.  The employers wrote specifically about students’ discernment in asking significant questions without needing continual reassurance, their competence in tackling unfamiliar tasks, and their ability to make sense of technical text. The 2nd-year students reported feeling confident and well-prepared.  I got a visit in my office from a student who had been a vocal critic of my increasingly “weird” teaching.  He shook my hand, looked me in the eye, and told me that he appreciated how well the tasks he performed in class reflected the industry.  He had just aced the employer’s entrance test on the first try.

The 1st-year students did well on their final project (an FM transmitter), becoming increasingly self-directed in developing test procedures, troubleshooting systematically, and recording their results (including migrating their lab notes from paper to Excel and Visio).  Their feedback is positive and constructive.  Here are their thoughts on what’s working well:

  • “The teaching aspects that are new to me.”
  • “Very dedicated teachings from an involved and thorough instructor.”
  • “Learning new concepts”
  • “Everything.”
  • “The skill sheets”

The only suggestion about what to change (other than “nothing”) was the balance between theory and shop time.  I agree.  In the last 5 weeks, I collapsed back into lecture mode, mostly because I was tired and couldn’t figure out what to do instead.  I have ideas for next year.

So this post is my way of saying hello, and keeping track of some things I plan to write about next.  In response to some long-ago requests, I’m working on posts about

  • an example of a measurement cycle, including how I chose the questions, why they arose in the first place, and how students investigated them
  • an example of a research cycle, including topics students presented and topics I presented
  • an example of how I assessed students’ critical thinking skills, including drafts of students’ writing and the kinds of feedback I gave

There are lots of other things in the hopper but I probably need to do these first for other topics to make sense.  If you notice something that I’ve left out or skipped over, your suggestions would be very welcome, as I try to organize this into a coherent story.

Last semester, as I stumbled into inquiry-based teaching, there were times when I wanted the students to learn something specific at a specific time.  For example, how to use an ammeter without blowing the fuse.

Option 1: I make a research presentation for acceptance to the model

It wasn’t perfect, but my solution was to propose something for the model myself.  I would prepare a 3-min presentation, bring 2 sources, and ask the students to evaluate them.  In the context of dozens of student presentations, I made 5 throughout the semester, so it kept my talk ratio fairly low.

Advantages: it gave me the opportunity to show them what I expected in a presentation and in an analysis of the sources.  It also gave me the opportunity to ask them for feedback about specific things, like “making the presentation as short as possible but no shorter” or “keeping the presentation focused.”

Disadvantage: it would be difficult for students to reject my arguments (they never did).  However, they did sometimes propose to rephrase them for clarity or precision.  They used the Rubric for Assessing Reasoning to formulate feedback about my logic.  This is certainly an improvement over what I was doing before.

Option 2: I put it on the skill sheet

I keep my skills-based grading system.  Going with the flow of student questions meant that sometimes we jumped around between units.  Occasionally I removed a skill from the list, if it was clear that the relevant outcome was being met some other way.  However, it was gratifying to realize that the things I put on the skill sheets were mostly things we ended up doing in the course of answering our questions.  In other words, the skills in the beginner course were things that beginners would actually care about while they were in the process of beginning to learn the topic.

If I needed the students to learn something specific that was measurable, I would put it on the “Shop” side of the skill sheet.

Every week during our shop period, I would write on the board the questions that had come up that week.  I would also write any skills that I wanted them to demonstrate by the end of the day.  They were always skills you would need in order to explore the questions.  If the skill wasn’t needed to explore our questions, then I didn’t need to teach it right that minute, did I.

I thought that skills-based grading generated a lot of data — so much that I couldn’t have done it without a subscription to a specially-designed software product.  I thought that cam-scanning their work to enable instant feedback generated a lot of data, and it did — I filled an 8GB memory stick this semester.  But that only kept track of which parts of my discipline’s “canon” my students knew or didn’t know.

This semester, I also needed to keep track of what they thought instead.  Also, what they were curious about.  What they had researched, what they had succeeded in adding to the model, and what they wanted to know about next.  The better I did at keeping track of this for each student, the better I was able to give feedback and ask questions that related to what they thought about, hadn’t noticed yet, or cared about.  When I finally wrestled the data into submission (about 3 weeks before the end of the term), it was in a spreadsheet and looked like this.

Click through for a bigger version

The far left column contains the questions.  I keep track of who asked them, when, and why they seemed significant.  If that question gets answered, I’ll fill in who proposed the answer, the date it was accepted, and the exact wording as accepted by the class.  See all the little Xs on the right?  I try to notice which themes are popular, and tag a question with an X for as many categories as it seems to belong to.  In the far right column, I can mark the question as active or inactive (if it has been answered, or the class has done some housecleaning and deemed it no longer relevant).

I like this approach because it requires much less work to set up than even a simple database, and works fine if I only need to make very simple queries.  For example, I can sort on the “Asked by” column to find all the questions asked by a particular student, by the date to get a chronology, by a particular topic (such as “batteries”), or by which questions are still in play.  If I want to get fancy, I can sort by “Active” and “Asked on” to find the question that’s been outstanding the longest (I think it’s “can atoms touch.”  That one nearly started a fist fight).
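If it helps to picture the queries, here is a minimal sketch of the kind of sorting and filtering I mean (the column names are the ones described above; the file name, student name, topic column, and the values in the “Active” column are invented for illustration):

```python
import pandas as pd

# Hypothetical file name; the columns mirror the tracking sheet described above.
questions = pd.read_excel("question_tracker.xlsx")

# All the questions asked by a particular student (name invented for the example)
print(questions[questions["Asked by"] == "J. Smith"])

# A chronology of every question
print(questions.sort_values("Asked on"))

# Questions tagged with a particular topic (tag columns hold an "X")
print(questions[questions["Batteries"] == "X"])

# Still-active questions, oldest first; the longest-outstanding one ends up on top
active = questions[questions["Active"] == "yes"].sort_values("Asked on")
print(active.head(1))
```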

I try to keep the wording as faithful as possible to the way the student asked it (I can tack some notes into the blank columns at the far right if I think I will need to remind myself what they mean).  My most significant mistake at first was not keeping track of the context in which the question arose.  Significance is on the back burner of my students’ minds, but very much on the front burner of mine.

This year, the first-year class has created their own model of what electrons do in circuits.  We add ideas to the model by investigating questions that they are interested in.  The questions have included everything from “If I had added another resistor to the circuit, would that have prevented my component from smoking” to “Do electrons always travel at the speed of light?”  There are two main ways that ideas get added to the model: research or measurement.

Research

Students individually choose a question from our “question bank,” which I collect based on their comments. They research the question and evaluate the quality of the reasoning in the sources.  They present their findings to the class; the class then assesses the research and decides whether to accept it, reject it, or send it back for clarification.

Measurement

I choose some questions from the question bank that we have the equipment to take measurements about — usually questions that have come up in the past week.  For a 3-hour lab period, I typically pick 4-6 questions. Everyone takes one measurement for each question.  A question would be something like “What determines the voltage across each component: position, resistance, other?”  The questions are specific enough that they suggest a certain circuit arrangement, but students are free to choose component types and values, voltage sources, etc.  Students must keep records of their circuits and measurements; I cam-scan their records at the end of the day.  The following day, we break into 4-6 small groups (as many as there were questions).  Each group collates the measurements for their assigned questions, looks for patterns, and recommends ideas to be added to the model.
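For reference only (not something I hand to the students, since the point is that they work it out from their own measurements), the pattern a simple series chain converges on is the usual divider relation:

```latex
% Standard series voltage-divider result, included as a reference:
V_k = V_{source} \cdot \frac{R_k}{R_1 + R_2 + \dots + R_n}
% In other words, position in the chain doesn't matter; each component's
% share of the voltage is set by its share of the total resistance.
```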

Motivating the Model

I explained that we were building the model we’d be using to predict the behaviour of circuits for the  next two years, and that on tests, I would be evaluating whether they used their model in a well-reasoned way (“You’re going to grade us based on what we say??”  They were astounded).  I cautioned them against rejecting things too quickly, since they would need as much structure as they could get.  I also cautioned them against accepting things too quickly.  If they accepted something that was contradictory or unclear, they would have to spend extra time down the road rooting it out, and rooting out any other ideas that had been accepted based on the poorly-reasoned one.

Results: Concentration, Community, and Discernment

We started the year with research presentations, since students’ early questions weren’t testable with the lab equipment we had.  Everyone had to give a short (3-min or less) verbal presentation of an idea that could be added to the model.  When they were not presenting, students completed a “Rubric for Assessing Reasoning” (see the comprehension constructor shown in this post).  There were 12 students in the class.  At the end of the presentations, each student turned in 11 rubrics.  Well holy smoke, you’ve never heard a quieter bunch of 1st-year students.  They were listening so hard you could hear a pin drop.  They shushed each other and asked the speaker to repeat things when they weren’t sure of the exact wording.  They were writing like mad.

After each presentation, we discussed it and voted on it.  For voting, they used the feedback flashcards I’d made in September.  Green means accept; red means reject; yellow means “I have a question or want something clarified”.  Consensus is required for an idea to be added to the model.  In that first week, I contributed a lot to the discussions — asking questions, pointing out possible conflicts.  Now, the students do most of that.  (Those conversations have yielded several astonishing innovations, including class training on basic facilitation techniques, a formal mechanism for figuring out who has the floor, and a demand that we sit in a circle.  I am not making this up.)

That was the beginning of our model of atomic structure and electricity.  The students have become adept at differentiating between questions that must be answered before we accept an idea, and questions that are not stopping us from accepting the presentation (those end up in the question bank, of course).

This semester, I turned over the DC Circuits course to the questions my students asked.  I started because their questions were excellent.  But I continued because I found ways to nurture the question-creation that both introduced students to “thinking like a tech” and, not coincidentally, “covered” all the curriculum.  First we needed a shared purpose: to be able to predict “what electrons are doing” in any DC circuit by the end of the semester.  Next, we needed to generate questions.  Here are a few examples of how it happened.

Class discussion parking lot

In the first few days of the semester, we brainstormed a list of ideas, vocab, and anything else anyone remembered about magnets, then about atoms.  I asked questions, introducing the standards for reasoning by asking “What exactly do you mean by … ” or “How much…”  or “How could idea A and idea B be true at the same time?”  Any time we reached “I don’t know” or a variety of apparently contradictory answers, I phrased it as a question and wrote it down.  This turned out to be a useful facilitation technique, to be used when students were repeating their points or losing patience with a topic. I checked with the class that I had adequately captured our confusion or conflicting points, and stored them for later.   Two days into the course we had questions like these:

  • What does it mean for an electron to be negative?  Is it the same as a magnet being negative?
  • Does space have atoms?
  • Can atoms touch?
  • We think protons are bigger than electrons because they weigh more, but do they actually take up more volume?

I continued throughout the semester to gather their questions; I periodically published the list and asked them to choose one to investigate (or, of course, propose another one).  We never ran out.

I assess their reasoning

At the beginning, I waded gradually into assessing students’ reasoning.  We started with some internet research where everyone had to find three sources that contributed to our understanding of our questions; I asked them to use Cornell paper to record what their source said on the right, their own thoughts (questions, connections to their experience, visuals, whatever) on the left, and a summary at the bottom.  (Later I did this with a Googledoc, but I went back to paper because of the helpfulness of sketches and formulas).

I collected these and wrote feedback about clarity, precision, logic, and the other criteria for assessing reasoning.  “What do you mean by this exactly?”  “How much more?”  “Does the source tell you what causes this?”  “Do you think these two sources are contradictory?”  “Have you experienced this yourself?”  I also kept track of all the questions they asked and added them to the question list.  Here’s an example, showing a quote from the source (right) and the student’s thinking (left) with my comment (bottom right).

There’s a lot of material to work with here: finding parallels between magnetic and electric fields; what the concept of ground means exactly; and an inkling of the idea that current is related to time.  I love this example because the student is working through technical text, exploring its implications, comparing them to his base knowledge, and finding some “weirdness.”  I mean, who ever heard of a magnet getting weaker because it was lying on the ground rather than sticking to the fridge?  Weirdness is high on my priority list for exploring questions.

I continued periodically to ask students to record both a published source and their thoughts/questions.  There’s something about that gaping, wide blank column on the left that makes people want to write things in it.

Students respond to my assessments

The next assignment was for students to pencil in some comments responding to my comments.  This got the ball rolling; they started to see what I was asking for.  They also loosened up their writing a bit; there’s something about writing diagonally in the corner of a page that makes people disinclined to use 25-cent words and convoluted sentence structure. Exhibit A is the same student’s response to my question:

Ok, so this student is known for flights of fancy.  But there’s even more to work with here: air as an insulator; the idea of rapid transfers of electrons when they come in contact with certain materials — as if the electrons are trying to get away from something; an opportunity to talk about metaphors and what they’re good for.

This exercise also set the stage for the idea that the comments I leave on their assignments are the beginning of a conversation, not the end, and that they should read them.  Finally, it generated questions (in their responses).  I was pretty broad in my interpretation of a question.  If a student claimed that “It’s impossible for X to be the same as Y,” and couldn’t substantiate it in response to my question, it would end up in the question list as “Can X be the same as Y?”

They assess a published source’s reasoning

The information the students found fell into four broad categories.  I printed copies of the results for everyone.  On the next day, students broke into four groups with each group analyzing the data for one category.  They had to summarize and evaluate for clarity, precision, lack of contradictions, etc.  I also asked them to keep track of what the group agreed on and what the group disagreed on.  As I circulated, I encouraged them to turn their disagreements into questions.

I assess their assessments

I gave groups live feedback by typing it into a document projected at the front of the room while they were working.  They quickly caught on that I was quoting them and started watching with one eye.  I lifted this group feedback model wholesale from Sam Shah, except that my positive feedback was about group behaviours that contributed to good-quality reasoning (pressing each other for clarification, noticing inferences, noticing contradictions, etc.).

We had a good conversation at the end of the class about what they thought made a good group discussion.  They left the whiteboards with me at the end of the day.

Rubric for Assessing Reasoning

I read their submissions and wrote back.  Next day, they had to get back into their groups, consider my feedback, and decide whether to change anything.  Then I asked them to present to their classmates, who would be assessing the presentations.  I knew they would need some help “holding their thinking” so I made this “reading comprehension constructor” (à la Cris Tovani) based on our criteria for evaluating thinking: it’s a rubric for assessing reasoning.

If you look closely you’ll see that three new criteria have appeared, in the form of check boxes; they are criteria about the source overall, not about the quality of reasoning.  Is the source reviewed by experts?  Is the source recent (this forces them to begin looking for copyright dates)?  Is the source relevant to our question? I asked that they carefully consider the presentations, and accept only the points that met our criteria for good quality reasoning.  Each student filled out a rubric for each presentation.  Rubrics were due at the end of class.

Voila: more questions.

Conclusion: Attributing authorship helps

I suspect that my recording of who asked which questions is part of what makes this work (see this post by John Burk for an interesting video of John’s colleagues discussing this idea with Brian Frank).  The students know and trust that I will attribute authorship of their ideas now; it seems to make them more willing to entrust their questions to me.  They’ve started saying things like “I don’t want to reject this idea from the model, I think it’s a good starting point, but next I think we need to know more about what voltage is doing exactly that causes current.  Could you add that question to the list?”

In a previous post, I explained the thought process behind seven of my choices of standards for evaluating thinking.  They are mostly unsurprising items like clarity, precision, and logic, unpacked into student-friendly language (I hope).  The remaining one is not like the others.  When we are evaluating reasoning, I ask my students to find and evaluate the connections to their own experience and intuition.

I don’t do this because I want them to reject ideas that contradict their expectations. I also don’t do it because it’s a warm-and-fuzzy way of making things seem “personal” or because it’s a mnemonic that anchors things in your brain.  Finally, I don’t do it (anymore) as a way to elicit and stamp out their ideas.  I do it because a bunch of previously disconnected thoughts I’ve had about teaching are converging here.  I’m trying to document the convergence.

The more I ask students to evaluate how their experiences and intuitions connect to new ideas, the more I learn: about my teaching, about their thinking, and about when I should ask for experience vs. when I should ask for intuition.  Every day, I find a new reason why it’s important to develop the habit of asking this question, and of determining whether our ideas help us accomplish our purpose:

  1. because my students often can’t tell the difference between their ideas and the author’s ideas.  I find this downright alarming.  When asked, they will report that an author’s idea was also theirs “all along” (even if they contradicted it yesterday);  or, they will report that “the author said” something that is simply not there.
  2. because an ounce of perplexity is worth a pound of engagement, and our own experience is a great source of perplexity (“but wait, I thought…”)
  3. because convincing students of some counter-intuitive idea without giving them a chance to connect to its counter-intuitiveness can steal the possibility that it will ever seem cool to them
  4. because “when things are acting funny, measure the amount of funny.”   If you are not comparing new ideas to your intuition, nothing ever seems funny
  5. because failing to ask this question teaches students that what they learn in school is not connected to the rest of the world
  6. because as Cris Tovani writes in I Read It But I Don’t Get It, we need to ask “So what?”  Making connections with the text simply to get through the assignment, without asking the “So what?” of how it moves us closer to our purpose, can damage our understanding rather than strengthen it (and might be an ingredient in the “mythologizing” that Grace writes about)
  7. because “when intellectual products attain classic status [and become divorced from our own ideas,] their importance is reflexively accepted, but not fully appreciated…”
  8. because I’m starting to think that well-reasoned misconceptions help us make progress toward changing the game from one of memorization to one that’s about “learning the genre” of a discipline.  I want my students to see “technician” as something they are, not something they do… and I think that that sense of participating in an identity, not just performing its tasks, is a clue to the 50% attrition rate in my program
  9. because maybe initial knowledge is a barrier to learning that must be corrected or maybe alternative conceptions are hallmarks of progress but either way, students need to talk about them… and I need to know what they are
  10. because the other day I wished I had an engineering logbook to keep track of the results of my experiments.  I haven’t wanted one of those since I left the R&D lab where I used to work.  The fact that I have some results worth keeping track of makes me more certain that I’m doing the kind of inquiring (into my students’ learning) that matters.

 
