You are currently browsing the category archive for the ‘confusion’ category.

Siobhan Curious inspired me to organize my thoughts so far about meta-cognition with her post “What Do Students Need to Learn About Learning.” Anyone want to suggest alternatives, additions, or improvements?

Time Management

One thing I’ve tried is to allow students to extend their due dates at will — for any reason or no reason.  The only condition is that they notify me before the end of the business day *before* the due date.  This removes the motivation to inflate or fabricate reasons — since they don’t need one.  It also promotes time management in two ways: one, it means students have to think one day ahead about what’s due.  If they start an assignment the night before it’s due and realize they can’t finish it for some reason, the extension is not available; so they get into the habit of starting things at least two days before the due date.  It’s a small improvement, but I figure it’s the logical first baby step!

The other way it promotes time management is that every student’s due dates end up being different, so they have to start keeping their own calendar — they can’t just ask a classmate, since everyone’s got custom due dates.  I can nag about the usefulness of using a calendar until the cows come home, but this provides a concrete motivation to do it.  This year I realized that my students, most of them of the generation that people complain is “always on their phones”, don’t know how to use their calendar app.  I’m thinking of incorporating calendar use next semester — especially showing them how to keep separate “school” and “personal” calendars so they can be displayed together or individually, and also why it’s useful to track both the dates work is due and the block of time when they actually plan to work on it.

Relating Ideas To Promote Retention

My best attempt at this has been to require it on tests and assignments: “give one example of an idea we’ve learned previously that supports this one,” or “give two examples of evidence from the data sheet that support your answer.”  I accept almost any answers here, unless they’re completely unrelated to the topic, and the students’ choices help me understand how they’re thinking.

Organizing Their Notes

Two things I’ve tried are handing out dividers at the beginning of the semester, one per topic… and creating activities that require students to use data from previous weeks or months.  I try to start this immediately at the beginning of the semester, so they get in the habit of keeping things in their binders, instead of tossing them in the bottom of a locker or backpack.  The latter seems to work better than the former… although I’d like to be more intentional about helping them “file” assignments and tests in the right section of their binders when they get passed back.  This also (I hope) helps them develop methodical ways of searching through their notes for information, which I think many students are unfamiliar with because they are so used to being able to press Ctrl-F.  Open-notes tests also help motivate this.

I also explicitly teach how and when to use the textbook’s table of contents vs. index, and give assignments where they have to look up information in the text (or find a practice problem on a given topic), which is surprisingly hard for my first-year college students!

Dealing With Failure

Interestingly, I have students who have so little experience with failure that they’re not skilled in dealing with it, and others who have experienced failure so consistently that they seem to have given up even trying to deal with it.  It’s hard to help both groups at the same time.  I’m experimenting with two main activities here: the Marshmallow Challenge and How Your Brain Learns and Remembers (based on ideas similar to Carol Dweck’s “growth mindset”).

Absolute Vs Analytical Ways of Knowing

I use the Foundation for Critical Thinking’s “Miniature Guide To Critical Thinking.”  It’s short, I can afford to buy a class set, and it’s surprisingly useful.  I introduce the pieces one at a time, as they become relevant.  See p. 18 for the idea of “multi-system thinking”; it’s their way of pointing out that the distinction between “opinions” and “facts” doesn’t go far enough, because most substantive questions require us to go beyond right and wrong answers into making a well-reasoned judgment call about better and worse answers — which is different from an entirely subjective and personal opinion about preference.  I also appreciate their idea that “critical thinking” means “using criteria”, not just “criticizing.”  And when class discussions get heated or confrontational, nothing helps me keep myself and my students focused better than their “intellectual traits” (p. 16 of the little booklet, or also available online here).  (My struggles, failures, and successes are somewhat documented in Evaluating Thinking.)

What the Mind Does While Reading

This is one of my major obsessions.  So far the most useful resources I have found are books by Cris Tovani, especially Do I Really Have to Teach Reading? and I Read It But I Don’t Get It.  Tovani is a teacher educator who describes herself as having been functionally illiterate for most of her school years.  Both books are full of concrete lesson ideas and handouts that can be photocopied.  I created some handouts that are available for others to download based on her exercises — such as the Pencil Test and the “Think-Aloud.”

Ideas About Ideas

While attempting these things, I’ve gradually learned that many of the concepts and vocabulary items about evaluating ideas are foreign to my students.  Many students don’t know words like “inference”, “definition”, “contradiction” (yes, I’m serious), or my favourite, “begging the question.”  So I’ve tried to weave these into everything we do, especially by using another Tovani-inspired technique — the “Comprehension Constructor.”  The blank handout is below, for anyone who’d like to borrow it or improve it.

To see some examples of the kinds of things students write when they do it, click through:


I heart zero

Here are some conversations that come up every year.

1. Zero Current

Student: “I tried to measure current, but I couldn’t get a reading.”

Me: “So the display was blank?”

Student: “No, it just didn’t show anything.”

(Note: Display showed 0.00)

2. Zero Resistance

Student: “We can’t solve this problem, because an insulator has no resistance.”

Me: “So it has zero ohms?”

Student: “No, it’s too high to measure.”

3. Zero Resistance, In a Different Way

Student: “In this circuit, X = 10, but we write R = 0 because the real ohms are unknown.”

(Note: The real ohms are not unknown.  The students made capacitors out of household materials last week, so they have previously explored that the plates have approx. 0 and the dielectric is considered open)

4. Zero Resistance Yet Another Way

Student: “I wrote zero ohms in my table for the resistance of the battery since there’s no way to measure it.”

What I Wonder

  • Are students thinking about zero as an indicator that means “error” or “you’re using the measuring tool wrong?”  A bathroom scale might show zero if you weren’t standing on it.  A gas gauge shows zero when the car isn’t running.
  • When students say “it has none” like in example 2, what is it that there is none of? They might mean “it has no known value”, which might be true, as opposed to “it has no resistance.”
  • Is this related to a need for more concreteness?  For example, would it help if we looked up the actual resistance of common types of insulation, or measured it with a megger?  That way we’d have a number to refer to.
  • #3 really stumps me. Is this a way of using “unknown” because they’re thinking of the dielectric as an insulator that is considered “open”, so that #3 is just a special case of #2?  Or is it unknown because the plates are considered to have 0 resistance and the dielectric is considered open, so we “don’t know” the resistance because it’s both at the same time?  The student who said that one finds it especially hard to express his reasoning, so he couldn’t elaborate when I tried to find out where he was coming from.
  • Why does this come up so often for resistance, and sometimes for current, but I can’t think of a single example for voltage?  I suspect it’s because both resistance and current feel concrete and like real phenomena that they could visualize, so they’re more able to experiment with their meanings.  I think they’re avoiding voltage altogether (first off, it’s about energy, which is weird in the first place, and then it’s a difference of energies, which makes it even less concrete because it’s not really the amount of anything — just the difference between two amounts, and then on top of that we never get to find out what the actual energies are, only the difference between them — which makes it even more abstract and hard to think about).
  • Since this comes up over and over about measurement, is it related to seeing the meter as an opaque, incomprehensible device that might just lie to you sometimes?  If so, this might be a kind of intellectual humility, acknowledging that they don’t fully understand how the meter works.  That’s still frustrating to me though, because we spend time at the beginning of the year exploring how the meter works — so they actually do have the information to explain what inside the meter could show a 0A reading.  Maybe those initial explanations about meters aren’t concrete enough — perhaps we should build one.  Sometimes students assume explanations are metaphors when actually they’re literal causes.
  • Is it related to treating automated devices in general as “too complicated for normal people to understand”?  If that’s what I’m reading into the situation, it explains why I have weirdly disproportionate irritation and frustration — I’m angry about this as a social phenomenon of elitism and disempowerment, and I assess the success of my teaching partly on the degree to which I succeed in subverting it… both of which are obviously not my students’ fault.

Other Thoughts

One possibility is that they’re actually proposing an idea similar to the database meaning of “null” — something like unknown, or undefined, or “we haven’t checked yet.”

I keep suspecting that this is about a need for more symbols.  Do we need a symbol for “we don’t know”?  It should definitely not be phi, and not the null symbol — it needs to look really different from zero.  Question mark maybe?

If students are not used to school-world tasks where the best answer is “that’s not known yet” or “that’s not measurable with our equipment”, they may be in the habit of filling in the blank.  If that’s the case, having a place-holder symbol might help.

This year, I’ve really started emphasizing the idea that zero, in a measurement, really means “too low to measure”.  I’ve also experimented with guiding them to decipher the precision of their meters by asking them to record “0.00 mA” as “< 5 uA”, or whatever is appropriate for their particular meter.  It helps them extend their conceptual fluency with rounding (since I am basically asking them to “unround”); it helps us talk about resolution, and it can help in our conversation about accuracy and error bars.  Similarly, “open” really means “resistance is too high to measure” (or relatedly, too high to matter) — so we find out what their particular meter can measure and record it as “> X MOhms”.
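The “unrounding” step can be sketched as a tiny calculation. This is just a hypothetical helper to show the reasoning — the resolution value depends entirely on the particular meter:

```python
# A minimal sketch of the "unrounding" idea: a display of 0.00 mA means the
# true value rounds to zero, i.e. it is below half the last-digit step.
# The 10 uA resolution below is an assumption -- check your meter's specs.

def unround_zero(resolution):
    """Given the meter's last-digit step, return the bound implied by a 0 reading."""
    return resolution / 2

# A display reading "0.00 mA" steps in 0.01 mA = 10 uA:
bound_uA = unround_zero(10)   # -> 5.0, so record "< 5 uA"
print(f"< {bound_uA} uA")
```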

The downfall there is that they start to want to use those numbers for something.  They have many ways of thinking about the “unequal” signs and one of them is to simply make up a number that corresponds to their idea of “significantly bigger”.  For example, when solving a problem, if they’re curious about whether electrons are actually flowing through air, they may use Ohm’s law and plug in 2.5 MOhms for the resistance of air. At first I rolled with it, because it was part of a relevant, significant, and causal line of thinking.  The trouble was that I then didn’t know how to respond when they started assuming that 2.5 MOhms was the actual resistance of air (any amount of air, incidentally…), and my suggestion that air might also be 2.0001 MOhms was met with resistance. (Sorry, couldn’t resist). (Ok, I’ll stop…)
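One way I might make the inequality reasoning concrete: if all we know about R is a lower bound, Ohm’s law gives us only an upper bound on current, not a made-up number. The supply voltage here is illustrative, not a measured value:

```python
# Sketch: with only "R > 2 MOhms" from the meter, Ohm's law yields an
# inequality for the current, not a specific number.

def max_current(v, r_min):
    """Upper bound on current when all we know is R > r_min."""
    return v / r_min

v = 12.0       # volts across the air gap (assumed for illustration)
r_min = 2e6    # ohms: the "greater than 2 MOhms" bound from the meter

print(f"I < {max_current(v, r_min) * 1e6:.0f} uA")   # I < 6 uA
```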

I’m afraid that this is making it hard for them to troubleshoot.  Zero current, in particular, is an extremely informative number — it means the circuit is open somewhere.  That piece of information can solve your problem, if you trust that your meter is telling you a true and useful thing. But if you throw away that piece of information as nonsense, it both reduces your confidence in your measurements, and prevents you from solving the problem.

Some Responses I Have Used

“Yes, your meter is showing 0.00 because there is 0.00 A of current flowing through it.”

“Don’t discriminate against zero — it isn’t nothing, it’s something important.  You’ll hurt its feelings!”

Not helpful, I admit!  If inquiry-based learning means that “students inquire into the discipline while I inquire into their thinking”*, neither of those is happening here.

Some Ideas For Next Year

  • Everyone takes apart their meter and measures the current, voltage, and resistance of things like the current-sense resistor, the fuse, the leads…
  • Insist on more consistent use of “less than 5 uA” or “greater than 2MOhms” so that we can practise reasoning with inequalities
  • “Is it possible that there is actually 0 current flowing?  Why or why not?”
  • Other ideas?

*I stole this definition of inquiry-based learning from Brian Frank, on a blog post that I have never found again… point me to the link, someone!

Sometimes I need to have all the students in my class improve their speed or accuracy in a particular technique.  Sometimes I just need everyone to do a few practice problems for an old topic so I can see where I should start.  But I don’t have time to make (or find) the questions, and I definitely don’t have time to go through them with a fine-toothed comb.

One approach I use is to have students individually generate and grade their own problems.  They turn in the whole graded thing and I write back with narrative feedback.  I get what I need (formative assessment data) and they get what they need — procedural practice, pointers from me, and some practice with self-assessment.

Note: this only works for problems that can be found in the back of a textbook, complete with answers in the appendix.

Here’s the handout I use.

What I Get Out of It

The most useful thing I get out of this is the “hard” question — the one they are unable to solve.  They are not asked to complete it: they are asked to articulate what makes that question difficult or confusing.

Important Principles

  • Students choose questions that are easy, medium, and hard for them.  This means they must learn to anticipate the difficulty level of a question before attempting it.
  • If they get a question wrong, they must either troubleshoot it or solve a different one.
  • They turn in their questions clearly marked right or wrong.


  • I don’t have to grade it — just read it and make comments
  • The students get to practice looking at things they don’t fully understand and articulating a question about it
  • I get to find out what they know and what they (think they) don’t know.
  • Students can work together by sharing their strategies, but not by sharing their numbers, since everyone ends up choosing different problems.
  • It makes my expectations explicit about how they should do practice questions in general: with the book closed, page number and question number clearly marked, with the schematics copied onto the paper (“Even if there’s no schematic in the book?!” they ask incredulously — clearly the point of writing down the question is just to learn to be a good scribe, not to improve future search times), etc.

Lessons Learned

I give this assignment during class, or at least get it started during class, to reduce copying.  Once students have chosen and started their questions, they’re unlikely to want to change them.

I went looking for a resource about “growth mindset” that I could use in class, because I am trying to convince my students that asking questions helps you get smarter (i.e. understand things better).  I appreciate Carol Dweck‘s work on her website and her book, but I don’t find them

  • concise enough,
  • clear enough, or
  • at an appropriate reading level for my students.

What I found was Diana Hestwood and Linda Russell’s presentation about “How Your Brain Learns and Remembers.”  The authors give permission for non-profit use by individual teachers.  It’s not perfect (I edited out the heading that says “You are naturally smart” … apologies to the authors) and it’s not completely in tune with some of the neuroscience research I am hearing about lately, but it meets my criteria (above) and got the students thinking and talking.

Despite her warning that it’s not intended to stand on its own and that the teacher should lead a discussion, I’d rather poke my eyes out than stand in front of the group while reading full paragraphs off of slides. I found the full-sentence, full-paragraph “presentation” to work on its own just fine (CLARIFIED: I removed all the slides with yellow backgrounds, and ended at slide 48).  I printed it, gave it to the students, and asked them to turn in their responses to the questions embedded in it.  I’ll report back to them with some conversational feedback on their individual papers and some class time for people to raise their issues and questions — as usual, discussion after the students have tangled with the ideas a bit.

The students really went for it.  They turned in answers that were in their own words (a tough ask for this group) and full of inferences, as well as some personal revelations about their own (good and bad) learning experiences.  There were few questions (the presentation isn’t exactly intended to elicit them) but lots of positive buzz.  About half the class stayed late, into coffee break, so they could keep writing about their opinions of this way of thinking.  Several told me that “this was actually interesting!”  (*laugh*)  I also got one “I’m going to show this to my girlfriend” and one, not-quite-accusatory but clearly upset “I wish someone had told me this a long time ago.”  (*gulp*)

I found a lot to like in this presentation.  It’s a non-threatening presentation of some material that could easily become heavily technical and intimidating.  It’s short, and it’s got some humour.  It’s got TONS of points of comparison for circuits, electronic signal theory, even semiconductors (not a co-incidence, obviously).  Most importantly, it allows students to quickly develop causal thinking (e.g. practice causes synapses to widen).

Last year I found out in February that my students couldn’t consistently distinguish between a cause and a definition, and trying to promote that distinction while they were overloaded with circuit theory was just too much.  So this year I created a unit called “Thinking Like a Technician,” in which I introduced the thinking skills we would use in the context of everyday examples. Here’s the skill sheet — use the “full screen” button for a bigger and/or downloadable version.

It helped a bit, but meant that we spend a couple of weeks talking about roller coasters, cars, and musical instruments.  Next year, this is what we’ll use instead.  It’ll give us some shared vocabulary for talking about learning and improving — including why things that feel “easy” don’t always help, why things that feel “confusing” don’t mean you’re stupid, why “feeling” like you know it isn’t a good test of whether you can do it, and why I don’t accept “reviewing your notes” as one of the things you did to improve when you applied for reassessment.

But this will also give us a rich example of what a “model” is, why they are necessarily incomplete and at least a bit abstracted, and how they can help us make judgement calls.  Last year, I started talking about the “human brain model” around this time of the year (during a discussion of why “I’ll just remember the due date for that assignment” is not a strong inference).  That was the earliest I felt I could use the word “model” and have them know what I meant — they were familiar enough with the “circuits and electrons model” to understand what a model was and what it was for.  Next year I hope to use this tool to do it the other way around.

Last February, I had a conversation with my first-year students that changed me.

On quizzes, I had been asking questions about what physically caused this or that.  The responses had a weird random quality that I couldn’t figure out.  On a hunch, I drew a four-column table on the board, like this:

Topic: Voltage

[a four-column table; the columns discussed below included “cause,” “definition,” and “characteristics”]
I gave the students 15 minutes to write whatever they could think of.

I collected the answers for “cause” and wrote them all down.  Nine out of ten students said that a difference of electrical energy levels causes voltage.  This is roughly like saying that car crashes are caused by automobiles colliding.

Me: Hm.  Folks, that’s what I would consider a “definition.”  Voltage is just a fancy word that means “difference of electrical energy levels” — it’s like saying the same thing twice.  Since they’re the same idea, one can’t cause the other — it’s like saying that voltage causes itself.

Student: So what causes voltage — is it current times resistance?

Me: No, formulas don’t cause things to happen.  They might tell you some information about cause, and they might not, depending on the formula, but think about it this way.  Before Mr. Ohm developed that formula, did voltage not exist?  Clearly, nature doesn’t wait around for someone to invent the formula.  Things in nature somehow happen whether we calculate them or not.  One thing that can cause voltage is the chemical reaction inside a battery.

Other student: Oh! So, that means voltage causes current!

Me: Yes, that’s an example of a physical cause. [Trying not to hyperventilate.  Remember, it’s FEBRUARY.  We theoretically learned this in September.]

Me: So, who thinks they were able to write a definition?

Students: [explode in a storm of expostulation.  Excerpts include] “Are you kidding?” “That’s impossible.” “I’d have to write a book!”  “That would take forever!”

Me: [mouth agape]  What do you mean?  Definitions are short little things, like in dictionaries. [Grim realization dawns.]  You use dictionaries, right?

Students: [some shake heads, some just give blank looks]

Me: Oh god.  Ok.  Um.  Why do you say it would take forever?

Student: How could I write everything about voltage?  I’d have to write for years.

Me: Oh.  Ok.  A definition isn’t a complete story of everything humanly known about a topic.  A definition is… Oh jeez.  Now I have to define definition. [racking brain, settling on “necessary and sufficient condition,” now needing to find a way to say that without using those words.]  Ok, let’s work with this for now: A definition is when you can say, “Voltage means ________; Whenever you have ___________, that means you have voltage.”

Students: [furrowed brows, looking amazed]

Me: So, let’s test that idea from earlier.  Does voltage mean a difference in electrical energy levels? [Students nod]  Ok, whenever you have a difference in electrical energy levels, does that mean there is voltage? [Students nod] Ok, then that’s a definition.

Third student: So, you flop it back on itself and see if it’s still true?

Me: Yep. [“Flopping it back on itself” is still what I call this process in class.] By the way, the giant pile of things you know about voltage, that could maybe go in the “characteristics” column.  That column could go on for a very long time.  But cause and definition should be really short, probably a sentence.

Students: [Silent, looking stunned]

Me: I think that’s enough for today.  I need to go get drunk.

Ok, I didn’t say that last part.

When I realized that my students had lumped a bunch of not-very-compatible things together under “cause,” other things started to make sense.  I’ve often had strange conversations with students about troubleshooting — lots of frustration and misunderstanding on both sides.  The fundamental question of troubleshooting is “what could cause that,” so if their concept of cause is fuzzy, the process must seem magical.

I also realized that my students did not consistently distinguish between “what made you think that” and “what made that happen.”  Both are questions about cause — one about the cause of our thinking or conclusions, and one about the physical cause of phenomena.

Finally, it made me think about the times when I hear people talk as though things have emotions and free will — especially high-tech products like computers are accused of “having a bad day” or “refusing to work.”  Obviously people say things like that as a joke, but it’s got me thinking, how often do my students act as though they actually think that inanimate objects make choices?  I need a name for this — it’s not magical thinking because my students are not acting as though “holding your tongue the right way” causes voltage.  They are, instead, acting as though voltage causes itself.  It seems like an ill-considered or unconscious kind of animism. I don’t want to insult thoughtful and intentional animistic traditions by lumping them in together, but I don’t know what else to call it.

Needless to say, this year I explicitly taught the class what I meant by “physical cause” at the beginning of the year.  I added a metacognition unit to the DC Circuits course called “Technical Thinking” (a close relative of the “technical reading” I proposed over a year ago, which I gradually realized I wanted students to do whether they were reading, listening, watching, or brushing their teeth).  Coming soon.

How I got my students to read the text before class: have them do their reading during class.

Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.

It’s not transformative educational design, but it’s an improvement, with these advantages:

  1. It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
  2. The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric.  That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
  3. When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class.  That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
  4. Knowing that I will address their questions provides an incentive to actually ask them.  It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
  5. Even in a classroom of 20 people, each person gets an individualized pace.
  6. I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
  7. We’re using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo.  This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns.  Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…

It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.

I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources.  It’s pretty similar to what I would say if I was lecturing.  I write the summaries in an informal tone intended to start a conversation.  Here’s an example:

And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).


Highlights of student feedback:

Noticing connections to earlier learning

When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite


When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol

Noticing confusion or contradiction

What do f2(OL) and Av(OL) stand for?

I’m still not sure what slew-induced distortion is.

I don’t know how to make sense of the f2 = funity/Av(CL).  Is f2 the bandwidth?

In [other instructor]’s course, we built an audio monitor, and we used an op amp.  We used a somewhat low frequency (1 kHz), and we still got a gain of 22.2.  If I use the equation, the bandwidth would be 45 Hz?  Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?

Asking for greater precision

What is the capacitance of the internal capacitor?

Is this a “flipped classroom”?

One point that stuck with me about many “flipped classroom” conversations is designing the process so that students do the low-cognitive-load activities when they’re home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they’re in class, surrounded by supportive peers and an experienced instructor.

This seems like a logical argument.  The trouble is that reading technical material is a high-cognitive-load activity for most of my students.  Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone.  The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.

The conclusion follows: have them read it in class, where I can support them.

The author of Gas Station Without Pumps has posted this thought-provoking list of technician-level skills every engineer should have:

  1. Reading voltage, current, and resistance with a multimeter.
  2. Using an oscilloscope to view time-varying signals:
    • Matching scope probe to input of scope.
    • Adjusting time-base.
    • Adjusting voltage scale.
    • Using triggering.
    • Reading approximate frequency from display.
    • Measuring time (either pulse width or time between edges on different channels)
  3. Using a bench power supply.
  4. Using a signal generator to generate sine waves and square waves.  Hmm, only the salinity conductance meter uses an AC signal so far—I may have to think of some other project-like labs that need the signal generator.  Perhaps we should have them do some capacitance measurements with a bridge circuit before building a capacitance touch sensor.
  5. Using a microprocessor with A/D conversion to record data from sensors.
  6. Handling ICs without frying them through static electricity.
  7. Using a breadboard to prototype circuits.
  8. Soldering through-hole components to a PC board.  (I think that surface-mount components are beyond the scope of the class, and freeform soldering without a board is too “arty” for an engineering class.)

I really like this course-design approach, and I think it will yield a very interesting, engaging course.

I started thinking out loud about the kinds of conceptual difficulties I’ve noticed and assessments I use.  When I realized it was turning into yet another one of my marathon comments, I thought I’d open up the conversation over here.

1. Using a Multimeter

When teaching students how to use meters, I’ve found it interesting and conceptually useful for them to use their meters to measure other meters.  For example, use the ohmmeter to measure the input resistance of the voltmeter, or use the ammeter to measure the output current of the diode checking function.  It gets students thinking about what the meters do, helps them get a sense for the differences between meters (especially if you have a number of makes and models available), and can help them build their judgement about when, for example, a current-sense resistor’s contribution to a series circuit can no longer be ignored.

It makes for useful test questions as well: draw a meter measuring another meter, and have students justify their predictions of what each meter will read.
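To make the “when can loading be ignored” judgement concrete, here’s a quick Python sketch of a voltmeter loading a divider.  All the values are made up for illustration — the 10 Mohm input resistance is a common DMM figure, but check the specs of your own meters.

```python
# A sketch of the voltmeter loading effect -- hypothetical values only.

def divider_voltage(v_supply, r_top, r_bottom):
    """Ideal voltage across the bottom resistor of a two-resistor divider."""
    return v_supply * r_bottom / (r_top + r_bottom)

def loaded_reading(v_supply, r_top, r_bottom, r_meter):
    """What a voltmeter with finite input resistance actually reads:
    the meter appears in parallel with the bottom resistor."""
    r_parallel = r_bottom * r_meter / (r_bottom + r_meter)
    return divider_voltage(v_supply, r_top, r_parallel)

ideal = divider_voltage(10, 1e6, 1e6)             # 5.0 V, no meter attached
typical_dmm = loaded_reading(10, 1e6, 1e6, 10e6)  # ~4.76 V with a 10 Mohm DMM
old_analog = loaded_reading(10, 1e6, 1e6, 20e3)   # badly loaded by a 20 kohm meter

print(f"ideal: {ideal:.2f} V,  10 Mohm DMM: {typical_dmm:.2f} V, "
      f"20 kohm meter: {old_analog:.2f} V")
```

The 20 kohm case approximates a cheap analog meter on a low range; the reading collapses almost completely, which tends to provoke exactly the discussion I’m after.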

2.  Using an Oscilloscope

The trigger function is difficult for a lot of my students to make sense of.  This becomes evident when they make a measurement on channel 1, then make another measurement on channel 1, then infer the phase relationship between two signals that were not measured simultaneously.  This also makes a useful test question — describe this scenario, and ask students to explain specifically why the conclusion is not valid.

I’ll also be curious to know if the students are able to relate the techniques for vector addition to the reality of phase shift in the time domain, including the apparently illogical concept that in a series RC circuit, the resistor’s voltage can lead the supply’s. (Where did the resistor get that voltage before the supply turned on?  would be the type of frustrated question my students would be upset about.)  Although introducing the concept of start-up transients seems like it should increase cognitive load, I find that my students welcome it as a way to resolve this apparent contradiction.  This is easier, of course, if you have storage scopes or (better yet) simulation software.
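For what it’s worth, the phasor arithmetic behind that “illogical” phase lead fits in a few lines of Python.  The component values here are arbitrary — just a sketch, not a lab exercise.

```python
import cmath
import math

# Series RC circuit: show that the resistor's voltage leads the supply's.
# All values are hypothetical.
f = 1000.0   # Hz
R = 1e3      # ohms
C = 100e-9   # farads

z_c = 1 / (1j * 2 * math.pi * f * C)  # capacitor impedance
z_total = R + z_c
v_supply = 10 + 0j                    # take the supply as the 0-degree reference
i = v_supply / z_total                # current phasor (same everywhere in series)
v_r = i * R                           # resistor voltage is in phase with current

phase_lead = math.degrees(cmath.phase(v_r) - cmath.phase(v_supply))
print(f"resistor voltage leads supply by {phase_lead:.1f} degrees")
```

The lead comes out positive (about 58 degrees for these values) because the current through the capacitor leads the supply voltage, and the resistor’s voltage tracks the current.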

In case it’s useful to anyone to have an electronic copy of an “oscilloscope grid” (for including in test questions, etc.), here’s one I made. (Whoops, upload problems.  Will add it here as soon as the upload succeeds).

When we start making a lot of use of the oscilloscope, that’s when the headaches start to flare up about “what ground is exactly, anyway.”  Lots of fruitful discussions are possible; what does the scope’s ground clip mean if the scope is plugged into an isolation transformer?  (Note, some isolation transformers isolate the grounding conductor, others don’t.)  What happens when two probes have their ground clips in different places?  (This is another favourite test question of mine: what is the voltage across component X, where X is shorted out by scope ground clips).

What does AC coupling do, exactly?  Why would you use it — why not just adjust the volts per division?  Asking them to measure the magnitude of the ripple on a DC supply can help them make sense of this.  My students also often have trouble confidently distinguishing between moving the vertical position on the scope and adding DC offset on the signal generator.

3.  Using a Bench Power Supply

This is fairly straightforward, except for current limiting (especially on a supply where the current limit knob is not graduated, or maybe even labelled in any way).  I find it useful for students to be able to choose a replacement fuse (and shop for it on a supplier’s website).  This apparently simple procedure can help students grapple with the distinction between voltage and current.  For beginners, it is counter-intuitive to imagine that there is voltage across an open fuse, even though there is no current.
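A toy calculation makes the open-fuse “paradox” concrete.  The values are hypothetical; the trick is modelling the blown fuse as a very large resistance, so the full supply voltage appears across it while essentially no current flows.

```python
# A toy series loop: supply -> fuse -> load.  Hypothetical values only.

def series_circuit(v_supply, r_fuse, r_load):
    """Return (current, fuse voltage, load voltage) for a series loop."""
    i = v_supply / (r_fuse + r_load)
    return i, i * r_fuse, i * r_load

# Intact fuse: a few milliohms, so it drops almost nothing.
i, v_fuse, v_load = series_circuit(12, 0.005, 10)
print(f"intact: I={i:.3f} A, V_fuse={v_fuse * 1000:.1f} mV, V_load={v_load:.2f} V")

# Blown fuse: model the open as a huge resistance.
i, v_fuse, v_load = series_circuit(12, 1e12, 10)
print(f"blown:  I={i:.2e} A, V_fuse={v_fuse:.2f} V, V_load={v_load:.2e} V")
```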

4. Using a Signal Generator

Measuring things in a bridge circuit is another conceptually useful experience; I use it to motivate Thevenin’s theorem, since a bridge circuit has no components either in series or in parallel, making it resistant to simple circuit-solving strategies.
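Here’s a sketch of the Thevenin calculation for a generic Wheatstone bridge — the resistor values and the 50-ohm detector are invented for illustration.  The open-circuit voltage is just the difference between the two divider midpoints, and the Thevenin resistance is what you see with the supply shorted.

```python
# Thevenin equivalent seen by the detector across a Wheatstone bridge.
# Arms R1/R2 form the left divider, R3/R4 the right one; the detector
# connects the two midpoints.  All values are hypothetical.

def thevenin_of_bridge(v_s, r1, r2, r3, r4):
    # Open-circuit voltage: difference between the two divider midpoints.
    v_th = v_s * (r2 / (r1 + r2) - r4 / (r3 + r4))
    # Thevenin resistance: supply shorted, so each side is a parallel pair.
    r_th = (r1 * r2) / (r1 + r2) + (r3 * r4) / (r3 + r4)
    return v_th, r_th

v_th, r_th = thevenin_of_bridge(10, 1000, 1000, 1000, 1100)
print(f"V_th = {v_th:.3f} V, R_th = {r_th:.1f} ohms")

# Detector current (say, a 50-ohm meter movement) then follows directly;
# the sign just indicates direction through the detector.
i_det = v_th / (r_th + 50)
print(f"detector current = {i_det * 1e6:.1f} uA")
```

A balanced bridge (all four arms equal) nulls V_th to zero, which is a nice sanity check to have students verify by hand.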

Other uses of a signal generator: if applicable, you could have your students perform a frequency sweep of something.  This can yield interesting insights, like noticing that, due to stray capacitance, high-pass filters are actually band-pass filters.
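A simulated sweep shows the effect.  This sketch assumes a hypothetical 10 kohm / 100 nF high-pass driven from a 600-ohm generator, with 100 pF of assumed stray capacitance shunting the output node — every value here is invented for illustration.

```python
import math

# Frequency sweep of an RC high-pass whose output is shunted by stray
# capacitance: at high frequencies the stray C rolls the response back
# off, so the "high-pass" measures as a band-pass.  Hypothetical values.

R_S = 600.0        # generator output impedance, ohms
R = 10e3           # filter resistor, ohms
C = 100e-9         # series high-pass capacitor, farads
C_STRAY = 100e-12  # assumed stray capacitance across the output, farads

def gain_db(f):
    """Magnitude response at frequency f, in dB."""
    w = 2 * math.pi * f
    z_c = 1 / (1j * w * C)
    z_out = R / (1 + 1j * w * R * C_STRAY)  # R in parallel with the stray C
    h = z_out / (R_S + z_c + z_out)
    return 20 * math.log10(abs(h))

for f in (10, 1e3, 100e3, 100e6):
    print(f"{f:>12.0f} Hz: {gain_db(f):6.1f} dB")
```

The response sits near 0 dB through the midband and falls off at both ends — the low corner set by the series C, the high corner by the stray C and the driving resistance.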

8.  Soldering

Soldering well, and accurately inspecting soldering, are great skills to have.  Surface-mount components might not be out of the question; if you want to introduce them, it’s not much harder to solder a 1206 chip resistor than a through-hole component, and it can reasonably be done with a regular iron.  Knowing the difference between leaded and lead-free solder might be useful too, especially as it relates to reliability and disposal.

I go back and forth about using perf-board.  On one hand it’s great for cheap soldering practice.  On the other hand, the lack of solder mask makes it very difficult for beginners to make tidy joints, with solder running down the lengths of the traces.

I’ll probably keep using this post as a catalogue of common difficulties.  If anyone can think of others (or has suggestions of other technician-level skills that engineers should have), I’d be curious to hear them.

My inquiry-based experiments forced me to face something I hadn’t considered: my lack of confidence in the power of reasoning.*  I spent a lot of time worrying that my students would build some elaborately misconstrued model that would hobble them forever. But you know what?  If you measure carefully, and read carefully, and evaluate your sources carefully, while attending to clarity, precision, causality and coherence, you come up with a decent model.  One that makes decent predictions for the circumstances under test, and points to significant questions.

Did I really believe that rigorous, good quality thinking would lead to hopelessly jumbled conclusions?  Apparently I did, because this realization felt surprising and unfamiliar.  Which surprised the heck out of me.  If I did believe that good quality thinking led to poor-quality conclusions (in other words, conclusions with no predictive or generative power), where exactly did I think good-quality conclusions came from?  Luck?  Delivered by a stork?  I mean, honestly.

If I were my student, I would challenge me to explain my “before-thinking” and “after-thinking.”  Like my students, I find myself unable to explain my “before-thinking.”  The best I can do is to say that my ideas about reasoning were unclear, unexamined, and jumbled up with my ideas about “talent” or “instinct” or something like that.  Yup — because if two people use equally well-reasoned thinking but one draws strong conclusions and the other draws weak conclusions, the difference must be an inherent “ability to be right” that one person has and the other person doesn’t.  *sigh* My inner “voice of Carol Dweck” is giving me some stern formative feedback as we speak.

Jason Buell breaks it down for me in a comment:

Eventually…I expect that they’ll get to the “right” answer. At least at my level, they don’t get into anything subtle enough that a mountain of evidence can be explained equally well by different models.

If they don’t get there eventually, either I haven’t done my job asking the right questions or they’re fitting their evidence to their conclusion.

Last year, I worried a lot that my teaching wasn’t strong enough to pull this off — by which I mean, I worried that my content knowledge of electronics and my process knowledge of ed theory weren’t strong enough.  And you know what?  They’re not — there’s a lot I don’t know.

But for this purpose, that’s not what I need.  I have a more than strong enough grasp of the content to notice when someone is being clear and precise, whether they are using magical thinking or causal thinking, begging the question, or being self-contradictory. And I have the group facilitation skills to keep the conversation focussed on those ideas.  Noticing that helped me sleep a lot better.

A slight difference from Jason’s note above: I don’t expect my students to get to the “right” (i.e. canonical) answer.  The textbook we use teaches the Bohr model, not quantum physics. If they go beyond that, great.  If they don’t, but come up with something that allows them to make predictions within 5-10% of their measurements, draw logical inferences about what’s wrong with a broken machine, and it’s clear, precise, and internally consistent, they’ll be great at their jobs.  And, it turns out, there is a limited number of possible models that satisfy those criteria, most of which have probably been canonical sometime during the last 90 years.  I don’t care if they settle on Bohr or Pauli.  I care that they develop some confidence in reasoning.  I care that they strengthen their thinking enough to justifiably have confidence in their reasoning, specifically.

* For the concept of “confidence in reasoning,” I’m indebted to the Foundation for Critical Thinking, which writes about it as one of their Valuable Intellectual Traits.

The game field of infinite moves

Frank Noschese just posed some questions about “just trying something” in problem-solving, and why students seem to do it intuitively with video games but experience “problem-solving paralysis” in physics.  When I started writing my second long-ish comment I realized I’m preoccupied with this, and decided to post it here.

What if part of the difference is students’ reliance on brute force approaches?

In a game, which is a human-designed environment, there are a finite number of possible moves.  And if you think of typical gameplay mechanics, that number is often 3-4.  Run left, run right, jump.  Run right, jump, shoot.   Even if there are 10, they’re finite and predictable: if you run from here and jump from exactly this point, you will always end up at exactly that point.  They’re also largely repetitive from game to game.  No matter how weird the situation in which you find yourself, you know the solution is some permutation of run, jump, shoot.  If you keep trying you will eventually exhaust all the approaches.  It is possible to explore every point on the game field and try every move at every point — the brute force approach (whether this is necessary or even desirable is immaterial to my point).

In nature, being as it is a non-human-designed environment, there is an arbitrarily large number of possible moves.  If students surmise that “just trying things until something works” could take years and still might not exhaust all the approaches, well, they’re right.  In fact, this is an insight into science that we probably don’t give them enough credit for.

Now, realistically, they also know that their teacher is not demanding something impossible.  But being asked to choose from among infinite options, and not knowing how long you’re going to be expected to keep doing that, must make you feel pretty powerless.  I suspect that some students experience a physics experiment as an infinite playing field with infinite moves, of which every point must be explored.  Concluding that that’s pointless or impossible is, frankly, valid.  The problem here isn’t that they’re not applying their game-playing strategies to science; the problem is that they are. Other conclusions that would follow:

  • If there are infinite equally likely options, then whether you “win” depends on luck.  There is no point trying to get better at this since it is uncontrollable.
  • People who regularly win at an uncontrollable game must have some kind of magic power (“smartness”) that is not available to others.

And yet, those of us on the other side of the lesson plan do walk into those kinds of situations.  We find them fun and challenging.   When I think about why I do, it’s because I’m sure of two things:

  • any failure at all will generate more information than I have
  • any new information will allow me to make better quality inferences about what to do next

I don’t experience the game space as an infinite playing field of which each point must be explored.  I experience it as an infinite playing field where it’s (almost) always possible to play “warmer-colder.”  I mine my failures for information about whether I’m getting closer to or farther away from the solution.  I’m comfortable with the idea that I will spend my time getting less wrong.  Since all failures contain this information, the process of attempting an experiment generally allows me to constrain the search space down to a manageable level.

My willingness to engage with these types of problems depends on a skill (extracting constraint info from failures), a belief (it is almost always possible to do this), and an attitude (“less wrong” is an honourable process that is worth being proud of, not an indictment of my intelligence) that I think my students don’t have.

Richard Louv makes a related point in Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder (my review and some quotes here).  He suggests that there are specific advantages to unstructured outdoor play that are not available otherwise — distinct from the advantages that are available from design-y play structures or in highly-interpreted walks on groomed trails.  Unstructured play brings us face to face with infinite possibility.  Maybe it builds some comfort and helps us develop mental and emotional strategies for not being immobilized by it?

I’m not sure how to check, and if I could, I’m not sure I’d know what to do about it.  I guess I’ll just try something, figure out a way to tell if it made things better or worse, then use that information to improve…

I’m on a jag about what confusion is and whether it’s necessary for learning.  My latest Gordian knot is about how confusion relates to pseudoteaching.

It seems that some condition of readiness has to happen before students can internalize an idea.  Obviously they will need some background knowledge, and basics like enough sleep, etc.  But even when my students have the material and social and intellectual conditions for learning, it often seems like there’s something missing.  To improve my ability to promote that readiness, I have to figure out what the heck it is.  I’m wondering if confusion is part of the answer.

Dan Goldner writes that students must have “prepared a space in their brain for new knowledge to fit into” — that they must have found some questions that they care about.

Grace points to the need for conflict in a good story.  She advocates creating a non-threatening “knowledge gap” using either cognitive dissonance or curiosity.

Dan Meyer, obviously, has made it an art form.  He calls it “perplexity” and distinguishes it from confusion (or sometimes describes it as a highly fruitful kind of confusion). If I’m reading it right, perplexity = conflict + caring about the outcome.

Rhett Allain has a great post about the “swamp of confusion” (go look — the map illustration is worth it).  He points out that a lifetime of pseudoteaching can convince students that working through confusion is impossible, or that teachers design courses to go around confusion, so that if you feel confused, either the teacher is incompetent, or you did something wrong.  He also pulls out some of the assumptions about “smartness” that people often hold about confusion: “If this IS indeed the way to go, I must be dumb or I wouldn’t be confused.”

Finally, the word “confusion” comes up in Derek Muller’s points about using videos to present misconceptions about science (videos that explained the “right answers” were clear but ineffective; videos that included common misconceptions were confusing but effective).

What I get in my classroom, which often gets called confusion, is conflict + anger.  Or possibly conflict + fear, or conflict + not caring (it’s possible that “not caring” is made out of anger, fear, and/or fatigue).  Just a guess: students get angry when they think I’ve created conflict that is unnecessary, or when they think I’ve created it carelessly.  These are worth thinking about.  Conflict can be threatening or exhausting.  Have I created the right conflict?  Is my specific method of creating conflict going to improve our learning, or did I use videos/whiteboards/particle accelerators because I think they’re fun and cool and make me look like a “with it” teacher?

Given that my students use the word “confusion” for a lot of situations where the next move is not immediately clear, I bet they would call all of these things confusion.

Which ones encourage learning?  Are any of them necessary?  Next year, I think I will ask students to make a note in their skills folder (portfolio-like thing with loose-leaf in it) to record confusions, so we can get a better grip on it.

In the meantime, I’m not having trouble with intellectual conflict.  By all the accounts above, the conflict is not just an inevitable side-effect but one of the main components of learning.  We’ve got lots of it to go around, and I hope that opening a conversation about it earlier in the semester will help students understand it as part of learning. Bringing to light our conflicts is part of what allows us to transform them into new understanding.

That leaves me with the “non-threatening” part, the “caring enough about the outcome to want to resolve it” part, and the “skills for dealing with it” part.