Sometimes I need to have all the students in my class improve their speed or accuracy in a particular technique. Sometimes I just need everyone to do a few practice problems for an old topic so I can see where I should start. But I don’t have time to make (or find) the questions, and I definitely don’t have time to go through them with a fine-toothed comb.
One approach I use is to have students individually generate and grade their own problems. They turn in the whole graded thing and I write back with narrative feedback. I get what I need (formative assessment data) and they get what they need — procedural practice, pointers from me, and some practice with self-assessment.
Note: this only works for problems that can be found in the back of a textbook, complete with answers in the appendix.
Here’s the handout I use.
What I Get Out of It
The most useful thing I get out of this is the “hard” question — the one they are unable to solve. They are not asked to complete it: they are asked to articulate what makes that question difficult or confusing.
- Students choose questions that are easy, medium, and hard for them. This means they must learn to anticipate the difficulty level of a question before attempting it.
- If they get a question wrong, they must either troubleshoot it or solve a different one.
- They turn in their questions clearly marked right or wrong.
- I don’t have to grade it — just read it and make comments.
- The students get to practice looking at things they don’t fully understand and articulating a question about it.
- I get to find out what they know and what they (think they) don’t know.
- Students can work together by sharing their strategies, but not by sharing their numbers, since everyone ends up choosing different problems.
- It makes my expectations explicit about how they should do practice questions in general: with the book closed, page number and question number clearly marked, with the schematics copied onto the paper (“Even if there’s no schematic in the book?!” they ask incredulously — clearly the point of writing down the question is just to learn to be a good scribe, not to improve future search times), etc.
I give this assignment during class, or at least get it started during class, to reduce copying. Once students have chosen and started their questions, they’re unlikely to want to change them.
I went looking for a resource about “growth mindset” that I could use in class, because I am trying to convince my students that asking questions helps you get smarter (i.e. understand things better). I appreciate Carol Dweck’s work on her website and her book, but I don’t find them
- concise enough,
- clear enough, or
- at an appropriate reading level for my students.
What I found was Diana Hestwood and Linda Russell’s presentation about “How Your Brain Learns and Remembers.” The authors give permission for non-profit use by individual teachers. It’s not perfect (I edited out the heading that says “You are naturally smart” … apologies to the authors) and it’s not completely in tune with some of the neuroscience research I am hearing about lately, but it meets my criteria (above) and got the students thinking and talking.
Despite the authors’ warning that it’s not intended to stand on its own and that the teacher should lead a discussion, I’d rather poke my eyes out than stand in front of the group while reading full paragraphs off of slides. I found the full-sentence, full-paragraph “presentation” to work on its own just fine (CLARIFIED: I removed all the slides with yellow backgrounds, and ended at slide 48). I printed it, gave it to the students, and asked them to turn in their responses to the questions embedded in it. I’ll report back to them with some conversational feedback on their individual papers and some class time for people to raise their issues and questions — as usual, discussion after the students have tangled with the ideas a bit.
The students really went for it. They turned in answers that were in their own words (a tough ask for this group) and full of inferences, as well as some personal revelations about their own (good and bad) learning experiences. There were few questions (the presentation isn’t exactly intended to elicit them) but lots of positive buzz. About half the class stayed late, into coffee break, so they could keep writing about their opinions of this way of thinking. Several told me that “this was actually interesting!” (*laugh*) I also got one “I’m going to show this to my girlfriend” and one, not-quite-accusatory but clearly upset “I wish someone had told me this a long time ago.” (*gulp*)
I found a lot to like in this presentation. It’s a non-threatening presentation of some material that could easily become heavily technical and intimidating. It’s short, and it’s got some humour. It’s got TONS of points of comparison for circuits, electronic signal theory, even semiconductors (not a co-incidence, obviously). Most importantly, it allows students to quickly develop causal thinking (e.g. practice causes synapses to widen).
Last year I found out in February that my students couldn’t consistently distinguish between a cause and a definition, and trying to promote that distinction while they were overloaded with circuit theory was just too much. So this year I created a unit called “Thinking Like a Technician,” in which I introduced the thinking skills we would use in the context of everyday examples. Here’s the skill sheet — use the “full screen” button for a bigger and/or downloadable version.
It helped a bit, but meant that we spent a couple of weeks talking about roller coasters, cars, and musical instruments. Next year, this is what we’ll use instead. It’ll give us some shared vocabulary for talking about learning and improving — including why things that feel “easy” don’t always help, why things that feel “confusing” don’t mean you’re stupid, why “feeling” like you know it isn’t a good test of whether you can do it, and why I don’t accept “reviewing your notes” as one of the things you did to improve when you applied for reassessment.
But this will also give us a rich example of what a “model” is, why they are necessarily incomplete and at least a bit abstracted, and how they can help us make judgement calls. Last year, I started talking about the “human brain model” around this time of the year (during a discussion of why “I’ll just remember the due date for that assignment” is not a strong inference). That was the earliest I felt I could use the word “model” and have them know what I meant — they were familiar enough with the “circuits and electrons model” to understand what a model was and what it was for. Next year I hope to use this tool to do it the other way around.
Last February, I had a conversation with my first-year students that changed me.
On quizzes, I had been asking questions about what physically caused this or that. The responses had a weird random quality that I couldn’t figure out. On a hunch, I drew a four-column table on the board, like this:
I gave the students 15 minutes to write whatever they could think of.
I collected the answers for “cause” and wrote them all down. Nine out of ten students said that a difference of electrical energy levels causes voltage. This is roughly like saying that car crashes are caused by automobiles colliding.
Me: Hm. Folks, that’s what I would consider a “definition.” Voltage is just a fancy word that means “difference of electrical energy levels” — it’s like saying the same thing twice. Since they’re the same idea, one can’t cause the other — it’s like saying that voltage causes itself.
Student: So what causes voltage — is it current times resistance?
Me: No, formulas don’t cause things to happen. They might tell you some information about cause, and they might not, depending on the formula, but think about it this way. Before Mr. Ohm developed that formula, did voltage not exist? Clearly, nature doesn’t wait around for someone to invent the formula. Things in nature somehow happen whether we calculate them or not. One thing that can cause voltage is the chemical reaction inside a battery.
Other student: Oh! So, that means voltage causes current!
Me: Yes, that’s an example of a physical cause. [Trying not to hyperventilate. Remember, it's FEBRUARY. We theoretically learned this in September.]
Me: So, who thinks they were able to write a definition?
Students: [explode in a storm of expostulation. Excerpts include] “Are you kidding?” “That’s impossible.” “I’d have to write a book!” “That would take forever!”
Me: [mouth agape] What do you mean? Definitions are short little things, like in dictionaries. [Grim realization dawns.] You use dictionaries, right?
Students: [some shake heads, some just give blank looks]
Me: Oh god. Ok. Um. Why do you say it would take forever?
Student: How could I write everything about voltage? I’d have to write for years.
Me: Oh. Ok. A definition isn’t a complete story of everything humanly known about a topic. A definition is… Oh jeez. Now I have to define definition. [racking brain, settling on "necessary and sufficient condition," now needing to find a way to say that without using those words.] Ok, let’s work with this for now: A definition is when you can say, “Voltage means ________; Whenever you have ___________, that means you have voltage.”
Students: [furrowed brows, looking amazed]
Me: So, let’s test that idea from earlier. Does voltage mean a difference in electrical energy levels? [Students nod] Ok, whenever you have a difference in electrical energy levels, does that mean there is voltage? [Students nod] Ok, then that’s a definition.
Third student: So, you flop it back on itself and see if it’s still true?
Me: Yep. ["Flopping it back on itself" is still what I call this process in class.] By the way, the giant pile of things you know about voltage, that could maybe go in the “characteristics” column. That column could go on for a very long time. But cause and definition should be really short, probably a sentence.
Students: [Silent, looking stunned]
Me: I think that’s enough for today. I need to go get drunk.
Ok, I didn’t say that last part.
When I realized that my students had lumped a bunch of not-very-compatible things together under “cause,” other things started to make sense. I’ve often had strange conversations with students about troubleshooting — lots of frustration and misunderstanding on both sides. The fundamental question of troubleshooting is “what could cause that,” so if their concept of cause is fuzzy, the process must seem magical.
I also realized that my students did not consistently distinguish between “what made you think that” and “what made that happen.” Both are questions about cause — one about the cause of our thinking or conclusions, and one about the physical cause of phenomena.
Finally, it made me think about the times when I hear people talk as though things have emotions and free will — especially when high-tech products like computers are accused of “having a bad day” or “refusing to work.” Obviously people say things like that as a joke, but it’s got me thinking: how often do my students act as though they actually think that inanimate objects make choices? I need a name for this — it’s not magical thinking, because my students are not acting as though “holding your tongue the right way” causes voltage. They are, instead, acting as though voltage causes itself. It seems like an ill-considered or unconscious kind of animism. I don’t want to insult thoughtful and intentional animistic traditions by lumping them in together, but I don’t know what else to call it.
Needless to say, this year I explicitly taught the class what I meant by “physical cause” at the beginning of the year. I added a metacognition unit to the DC Circuits course called “Technical Thinking” (a close relative of the “technical reading” I proposed over a year ago, which I gradually realized I wanted students to do whether they were reading, listening, watching, or brushing their teeth). Coming soon.
How I got my students to read the text before class: have them do their reading during class.
Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.
It’s not transformative educational design, but it’s an improvement, with these advantages:
- It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
- The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric. That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
- When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class. That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
- Knowing that I will address their questions provides an incentive to actually ask them. It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
- Even in a classroom of 20 people, each person gets an individualized pace.
- I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
- We’re using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo. This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns. Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…
It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.
I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources. It’s pretty similar to what I would say if I was lecturing. I write the summaries in an informal tone intended to start a conversation. Here’s an example:
And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).
Highlights of student feedback:
Noticing connections to earlier learning
When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite
When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol
Noticing confusion or contradiction
What do f2(OL) and Av(OL) stand for?
I’m still not sure what slew-induced distortion is.
I don’t know how to make sense of the f2 = funity/Av(CL). Is f2 the bandwidth?
In [other instructor]’s course, we built an audio monitor, and we used an op amp. We used a somewhat low frequency (1 kHz), and we still got a gain of 22.2. If I use the equation, the bandwidth would be 45 Hz? Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?
Asking for greater precision
What is the capacitance of the internal capacitor?
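The bandwidth question in that last piece of feedback can be worked through with a quick calculation. The unity-gain frequency below is an assumed value (a typical figure for a 741-style op amp — the actual part isn’t named in the feedback), but the shape of the answer doesn’t depend on it:

```python
# Closed-loop bandwidth from the gain-bandwidth product: f2 = funity / Av(CL)
f_unity = 1e6   # assumed unity-gain frequency, Hz (typical of a 741-style op amp)
av_cl = 22.2    # closed-loop gain from the students' audio monitor

f2 = f_unity / av_cl
print(f"Closed-loop bandwidth: {f2:.0f} Hz")  # ~45045 Hz, i.e. about 45 kHz

# The bandwidth is not a band centered on the operating frequency: the amp
# passes everything from DC up to f2, so a 1 kHz signal sits well inside it.
assert f2 > 1e3
```

If the student’s 45 Hz came from plugging the 1 kHz operating frequency into the formula in place of funity (1000 / 22.2 ≈ 45), that would explain the confusion — which is exactly the kind of thing this feedback loop surfaces.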
Is this a “flipped classroom”?
One point that stuck with me about many “flipped classroom” conversations is designing the process so that students do the low-cognitive-load activities when they’re home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they’re in class, surrounded by supportive peers and an experienced instructor.
This seems like a logical argument. The trouble is that reading technical material is a high-cognitive-load activity for most of my students. Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone. The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.
The conclusion follows: have them read it in class, where I can support them.
The author of Gas Station Without Pumps has posted this thought-provoking list of technician-level skills every engineer should have:
- Reading voltage, current, and resistance with a multimeter.
- Using an oscilloscope to view time-varying signals:
- Matching scope probe to input of scope.
- Adjusting time-base.
- Adjusting voltage scale.
- Using triggering.
- Reading approximate frequency from display.
- Measuring time (either pulse width or time between edges on different channels)
- Using a bench power supply.
- Using a signal generator to generate sine waves and square waves. Hmm, only the salinity conductance meter uses an AC signal so far—I may have to think of some other project-like labs that need the signal generator. Perhaps we should have them do some capacitance measurements with a bridge circuit before building a capacitance touch sensor.
- Using a microprocessor with A/D conversion to record data from sensors.
- Handling ICs without frying them through static electricity.
- Using a breadboard to prototype circuits.
- Soldering through-hole components to a PC board. (I think that surface-mount components are beyond the scope of the class, and freeform soldering without a board is too “arty” for an engineering class.)
I really like this course-design approach, and I think it will yield a very interesting, engaging course.
I started thinking out loud about the kinds of conceptual difficulties I’ve noticed and assessments I use. When I realized it was turning into yet another one of my marathon comments, I thought I’d open up the conversation over here.
1. Using a Multimeter
When teaching students how to use meters, I’ve found it interesting and conceptually useful for them to use their meters to measure other meters. For example, use the ohmmeter to measure the input resistance of the voltmeter, or use the ammeter to measure the output current of the diode checking function. It gets students thinking about what the meters do, helps them get a sense for the differences between meters (especially if you have a number of makes and models available), and can help them build their judgement about when, for example, a current-sense resistor’s contribution to a series circuit can no longer be ignored.
It makes for useful test questions as well: draw a meter measuring another meter, and have students justify their predictions of what each meter will read.
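The judgement call about meter loading can be sketched numerically. All the component values below are hypothetical, chosen just to make the effect visible (the 10 MΩ input resistance is a common figure for digital multimeters, but you’d measure your own with an ohmmeter, as above):

```python
# How a voltmeter's input resistance loads the circuit it measures.
# All values are hypothetical, chosen to make the loading effect visible.
r1 = 1e6          # top of a voltage divider, ohms
r2 = 1e6          # bottom of the divider, ohms
v_supply = 10.0   # volts
r_meter = 10e6    # voltmeter input resistance (found by measuring the meter)

def parallel(a, b):
    return a * b / (a + b)

v_ideal = v_supply * r2 / (r1 + r2)            # 5.000 V with no meter attached
r2_loaded = parallel(r2, r_meter)              # meter sits in parallel with r2
v_measured = v_supply * r2_loaded / (r1 + r2_loaded)
print(f"ideal: {v_ideal:.3f} V, measured: {v_measured:.3f} V")  # 5.000 V vs 4.762 V
```

With low-value divider resistors the two numbers agree to several decimal places; students can rerun the sketch with different values to build a feel for when the meter’s contribution can no longer be ignored.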
2. Using an Oscilloscope
The trigger function is difficult for a lot of my students to make sense of. This becomes evident when they make a measurement on channel 1, then make another measurement on channel 1, then infer the phase relationship between two signals that were not measured simultaneously. This also makes a useful test question — describe this scenario, and ask students to explain specifically why the conclusion is not valid.
I’ll also be curious to know if the students are able to relate the techniques for vector addition to the reality of phase shift in the time domain, including the apparently illogical concept that in a series RC circuit, the resistor’s voltage can lead the supply’s. (“Where did the resistor get that voltage before the supply turned on?” is the type of frustrated question my students would ask.) Although introducing the concept of start-up transients seems like it should increase cognitive load, I find that my students welcome it as a way to resolve this apparent contradiction. This is easier, of course, if you have storage scopes or (better yet) simulation software.
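The phase lead itself drops out of a short phasor calculation (component values below are arbitrary, just for illustration):

```python
# Series RC circuit: does the resistor voltage really lead the supply voltage?
# Component values are arbitrary, chosen only for illustration.
import cmath
import math

R = 1000.0        # ohms
C = 100e-9        # farads
f = 1000.0        # hertz
w = 2 * math.pi * f

Zc = 1 / (1j * w * C)          # capacitor impedance, -j/(wC)
Vs = 10 + 0j                   # supply phasor, taken as the 0-degree reference
I = Vs / (R + Zc)              # series current
Vr = I * R                     # resistor voltage phasor (in phase with current)

phase_deg = math.degrees(cmath.phase(Vr))
print(f"V_R leads V_S by {phase_deg:.1f} degrees")  # about 57.9 degrees
```

The positive phase angle is the steady-state answer; the start-up transient is what makes it physically believable that the resistor’s waveform can peak before the supply’s once the circuit has settled.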
In case it’s useful to anyone to have an electronic copy of an “oscilloscope grid” (for including in test questions, etc.), here’s one I made. (Whoops, upload problems. Will add it here as soon as the upload succeeds).
When we start making a lot of use of the oscilloscope, that’s when the headaches start to flare up about “what ground is exactly, anyway.” Lots of fruitful discussions are possible; what does the scope’s ground clip mean if the scope is plugged into an isolation transformer? (Note, some isolation transformers isolate the grounding conductor, others don’t.) What happens when two probes have their ground clips in different places? (This is another favourite test question of mine: what is the voltage across component X, where X is shorted out by scope ground clips).
What does AC coupling do, exactly? Why would you use it — why not just adjust the volts per division? Asking them to measure the magnitude of the ripple on a DC supply can help them make sense of this. My students also often have trouble being confident of the difference between moving the display level on the scope and adding DC offset on the signal generator.
3. Using a Bench Power Supply
This is fairly straightforward, except for current limiting (especially on a supply where the current limit knob is not graduated, or maybe even labelled in any way). I find it useful for students to be able to choose a replacement fuse (and shop for it on a supplier’s website). This apparently simple procedure can help students grapple with the meaning of the distinction between voltage and current. For beginners, it is counter-intuitive to imagine that there is voltage across an open fuse, even though there is no current.
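The open-fuse puzzle reduces to a one-loop KVL argument, which can be sketched like this (supply and load values are hypothetical):

```python
# KVL around a series loop with a blown fuse: the current is zero everywhere,
# so the load drops nothing and the entire supply appears across the open fuse.
v_supply = 12.0    # volts (hypothetical)
r_load = 100.0     # ohms (hypothetical)

i = 0.0                      # open fuse: no complete path, so no current flows
v_load = i * r_load          # 0 V across the load
v_fuse = v_supply - v_load   # KVL: whatever is left lands across the fuse

print(f"current: {i} A, voltage across open fuse: {v_fuse} V")  # 0.0 A, 12.0 V
```

Full voltage with zero current is exactly the combination beginners find counter-intuitive, and it’s also how you find the blown fuse with a voltmeter.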
4. Using a Signal Generator
Measuring things in a bridge circuit is another conceptually useful experience; I use it to motivate Thevenin’s theorem, since a bridge circuit has no components in series nor in parallel, making it resistant to simple circuit-solving strategies.
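The Thevenin reduction the bridge motivates can be sketched in a few lines (arm values below are hypothetical; the bridge is deliberately unbalanced so the open-circuit voltage is nonzero):

```python
# Thevenin equivalent seen by the detector of a Wheatstone bridge.
# Arm values are hypothetical; the bridge is unbalanced so Vth is nonzero.
v_s = 10.0
r1, r2 = 1000.0, 1000.0   # left divider (r1 on top, r2 on bottom)
r3, r4 = 1000.0, 2000.0   # right divider (r3 on top, r4 on bottom)

def parallel(a, b):
    return a * b / (a + b)

# Open-circuit voltage between the two divider midpoints
v_th = v_s * (r2 / (r1 + r2) - r4 / (r3 + r4))
# Resistance looking back in, with the source replaced by a short
r_th = parallel(r1, r2) + parallel(r3, r4)

print(f"Vth = {v_th:.3f} V, Rth = {r_th:.1f} ohms")  # -1.667 V, 1166.7 ohms
```

Because no two components in the bridge are strictly in series or in parallel from the detector’s point of view, the divider-plus-parallel decomposition above only appears after the Thevenin step — which is the whole pedagogical point.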
Other uses of a signal generator: if applicable, you could have your students perform a frequency sweep of something. This can yield interesting insights, like noticing that, due to stray capacitance, high-pass filters are actually band-pass filters.
Soldering well, and accurately inspecting soldering, are great skills to have. Surface-mount components might not be out of the question; if you want to introduce them, it’s not much harder to solder a 1206 chip resistor than a through-hole component, and can reasonably be done with a regular iron. Knowing the difference between lead and lead-free solder might be useful too, especially as it relates to reliability and disposability.
I go back and forth about using perf-board. On one hand it’s great for cheap soldering practice. On the other hand, the lack of solder mask makes it very difficult for beginners to make tidy joints, with solder running down the lengths of the traces.
I’ll probably keep using this post as a catalogue of common difficulties. If anyone can think of others (or has suggestions of other technician-level skills that engineers should have), I’d be curious to hear them.
My inquiry-based experiments forced me to face something I hadn’t considered: my lack of confidence in the power of reasoning.* I spent a lot of time worrying that my students would build some elaborately misconstrued model that would hobble them forever. But you know what? If you measure carefully, and read carefully, and evaluate your sources carefully, while attending to clarity, precision, causality and coherence, you come up with a decent model. One that makes decent predictions for the circumstances under test, and points to significant questions.
Did I really believe that rigorous, good quality thinking would lead to hopelessly jumbled conclusions? Apparently I did, because this realization felt surprising and unfamiliar. Which surprised the heck out of me. If I did believe that good quality thinking led to poor-quality conclusions (in other words, conclusions with no predictive or generative power), where exactly did I think good-quality conclusions came from? Luck? Delivered by a stork? I mean, honestly.
If I were my student, I would challenge me to explain my “before-thinking” and “after-thinking.” Like my students, I find myself unable to explain my “before-thinking.” The best I can do is to say that my ideas about reasoning were unclear, unexamined, and jumbled up with my ideas about “talent” or “instinct” or something like that. Yup — because if two people use equally well-reasoned thinking but one draws strong conclusions and the other draws weak conclusions, the difference must be an inherent “ability to be right” that one person has and the other person doesn’t. *sigh* My inner “voice of Carol Dweck” is giving me some stern formative feedback as we speak.
Eventually…I expect that they’ll get to the “right” answer. At least at my level, they don’t get into anything subtle enough that a mountain of evidence can be explained equally well by different models.
If they don’t get there eventually, either I haven’t done my job asking the right questions or they’re fitting their evidence to their conclusion.
Last year, I worried a lot that my teaching wasn’t strong enough to pull this off — by which I mean, I worried that my content knowledge of electronics and my process knowledge of ed theory wasn’t strong enough. And you know? They’re not — there’s a lot I don’t know.
But for this purpose, that’s not what I need. I have a more than strong enough grasp of the content to notice when someone is being clear and precise, whether they are using magical thinking or causal thinking, begging the question, or being self-contradictory. And I have the group facilitation skills to keep the conversation focussed on those ideas. Noticing that helped me sleep a lot better.
A slight difference from Jason’s note above: I don’t expect my students to get to the “right” (i.e. canonical) answer. The textbook we use teaches the Bohr model, not quantum physics. If they go beyond that, great. If they don’t, but come up with something that allows them to make predictions within 5-10% of their measurements, draw logical inferences about what’s wrong with a broken machine, and it’s clear, precise, and internally consistent, they’ll be great at their jobs. And, it turns out, there is a limited number of possible models that satisfy those criteria, most of which have probably been canonical sometime during the last 90 years. I don’t care if they settle on Bohr or Pauli. I care that they develop some confidence in reasoning. I care that they strengthen their thinking enough to justifiably have confidence in their reasoning, specifically.
* For the concept of “confidence in reasoning,” I’m indebted to the Foundation for Critical Thinking, which writes about it as one of their Valuable Intellectual Traits.
Frank Noschese just posed some questions about “just trying something” in problem-solving, and why students seem to do it intuitively with video games but experience “problem-solving paralysis” in physics. When I started writing my second long-ish comment I realized I’m preoccupied with this, and decided to post it here.
What if part of the difference is students’ reliance on brute force approaches?
In a game, which is a human-designed environment, there are a finite number of possible moves. And if you think of typical gameplay mechanics, that number is often 3-4. Run left, run right, jump. Run right, jump, shoot. Even if there are 10, they’re finite and predictable: if you run from here and jump from exactly this point, you will always end up at exactly that point. They’re also largely repetitive from game to game. No matter how weird the situation in which you find yourself, you know the solution is some permutation of run, jump, shoot. If you keep trying you will eventually exhaust all the approaches. It is possible to explore every point on the game field and try every move at every point — the brute force approach (whether this is necessary or even desirable is immaterial to my point).
In nature, being as it is a non-human-designed environment, there is an arbitrarily large number of possible moves. If students surmise that “just trying things until something works” could take years and still might not exhaust all the approaches, well, they’re right. In fact, this is an insight into science that we probably don’t give them enough credit for.
Now, realistically, they also know that their teacher is not demanding something impossible. But being asked to choose from among infinite options, and not knowing how long you’re going to be expected to keep doing that, must make you feel pretty powerless. I suspect that some students experience a physics experiment as an infinite playing field with infinite moves, of which every point must be explored. Concluding that that’s pointless or impossible is, frankly, valid. The problem here isn’t that they’re not applying their game-playing strategies to science; the problem is that they are. Other conclusions that would follow:
- If there are infinite equally likely options, then whether you “win” depends on luck. There is no point trying to get better at this since it is uncontrollable.
- People who regularly win at an uncontrollable game must have some kind of magic power (“smartness”) that is not available to others.
And yet, those of us on the other side of the lesson plan do walk into those kinds of situations. We find them fun and challenging. When I think about why I do, it’s because I’m sure of two things:
- any failure at all will generate more information than I have
- any new information will allow me to make better quality inferences about what to do next
I don’t experience the game space as an infinite playing field of which each point must be explored. I experience it as an infinite playing field where it’s (almost) always possible to play “warmer-colder.” I mine my failures for information about whether I’m getting closer to or farther away from the solution. I’m comfortable with the idea that I will spend my time getting less wrong. Since all failures contain this information, the process of attempting an experiment generally allows me to constrain it down to a manageable level.
My willingness to engage with these types of problems depends on a skill (extracting constraint info from failures), a belief (it is almost always possible to do this), and an attitude (“less wrong” is an honourable process that is worth being proud of, not an indictment of my intelligence) that I think my students don’t have.
Richard Louv makes a related point in Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder (my review and some quotes here). He suggests that there are specific advantages to unstructured outdoor play that are not available otherwise — distinct from the advantages that are available from design-y play structures or in highly-interpreted walks on groomed trails. Unstructured play brings us face to face with infinite possibility. Maybe it builds some comfort and helps us develop mental and emotional strategies for not being immobilized by it?
I’m not sure how to check, and if I could, I’m not sure I’d know what to do about it. I guess I’ll just try something, figure out a way to tell if it made things better or worse, then use that information to improve…
I’m on a jag about what confusion is and whether it’s necessary for learning. My latest Gordian knot is about how confusion relates to pseudoteaching.
It seems that some condition of readiness has to happen before students can internalize an idea. Obviously they will need some background knowledge, and basics like enough sleep, etc. But even when my students have the material and social and intellectual conditions for learning, it often seems like there’s something missing. To improve my ability to promote that readiness, I have to figure out what the heck it is. I’m wondering if confusion is part of the answer.
Dan Goldner writes that students must have “prepared a space in their brain for new knowledge to fit into” — that they must have found some questions that they care about.
Grace points to the need for conflict in a good story. She advocates creating a non-threatening “knowledge gap” using either cognitive dissonance or curiosity.
Dan Meyer, obviously, has made it an art form. He calls it “perplexity” and distinguishes it from confusion (or sometimes describes it as a highly fruitful kind of confusion). If I’m reading it right, perplexity = conflict + caring about the outcome.
Rhett Allain has a great post about the “swamp of confusion” (go look — the map illustration is worth it). He points out that a lifetime of pseudoteaching can convince students that working through confusion is impossible, or that teachers design courses to go around confusion, so that if you feel confused, either the teacher is incompetent, or you did something wrong. He also pulls out some of the assumptions about “smartness” that people often hold about confusion: “If this IS indeed the way to go, I must be dumb or I wouldn’t be confused.”
Finally, the word “confusion” comes up in Derek Muller’s points about using videos to present misconceptions about science (videos that explained the “right answers” were clear but ineffective; videos that included common misconceptions were confusing but effective).
What I get in my classroom, which often gets called confusion, is conflict + anger. Or possibly conflict + fear, or conflict + not caring (it’s possible that “not caring” is made out of anger, fear, and/or fatigue). Just a guess: students get angry when they think I’ve created conflict that is unnecessary, or when they think I’ve created it carelessly. These are worth thinking about. Conflict can be threatening or exhausting. Have I created the right conflict? Is my specific method of creating conflict going to improve our learning, or did I use videos/whiteboards/particle accelerators because I think they’re fun and cool and make me look like a “with it” teacher?
Given that my students use the word “confusion” for a lot of situations where the next move is not immediately clear, I bet they would call all of these things confusion.
Which ones encourage learning? Are any of them necessary? Next year, I think I will ask students to make a note in their skills folder (portfolio-like thing with loose-leaf in it) to record confusions, so we can get a better grip on it.
In the meantime, I have no shortage of intellectual conflict. By all the accounts above, conflict is not just an inevitable side-effect but one of the main components of learning. We’ve got lots of it to go around, and I hope that opening a conversation about it earlier in the semester will help students understand it as part of learning. Bringing our conflicts to light is part of what allows us to transform them into new understanding.
That leaves me with the “non-threatening” part, the “caring enough about the outcome to want to resolve it” part, and the “skills for dealing with it” part.
As part of the course evaluation for High-Reliability Soldering, I asked these questions.
Are there different kinds of confusion? If so, do you learn more from certain kinds? Why?
There is confusion that happens when you get too much information at the same time. And confusion that happens when you think you know the answer but it’s the next best thing. I learn more when getting it wrong and learning from my mistakes.
I think their 2 different kinds of confusion, 1st being “something you are just starting to learn” and 2nd being “something that you can’t understand.” You can learn from both because the more you learn, the more new things you will discover.
Yes. New confusion, I learn by practicing and stuff I thought I knew confusion, makes me get frustrated because it makes me think that my whole theory is wrong even tho parts might be right.
No, all confusion is the same, the attitude I have can be different in different circumstances.
What would help you get the most learning out of confusing ideas? Why?
Someone that knows what they’re talking about explaining it in a way I can understand. Or just a lot of practice, I find experience a key way to understand something.
Trying not to get frustrated, making sure I understand the idea first
Having time to take my time and look over what I was confused about. Going over it as a class if more than one person is confused about the same thing. You wouldn’t feel alone, like you were the only one who didn’t understand.
Just sitting down with someone who understands, and knows what they are doing. Because I know I can ask questions that they can answer properly and with knowledge and not just kinda guess at the answer.
In our previous conversation about confusion, I got the impression that “I’m confused” could mean “I can’t tell these ideas apart/I’ve come to contradictory conclusions” (the dictionary definition of confusion), or it could mean “I don’t know” (i.e. a word, a procedure, or how to approach a problem). They are very hard on themselves when they encounter things they don’t know. I’m guessing that the thought process goes something like “Schools are set up to only give you information you already know. So if I don’t know something, it must mean that I did something wrong or I’m stupid.” I’m guessing here, I’ll have to come back to this.
From the responses above, it seems that my students use “confusion” even more broadly to include overwhelm, fatigue, and “when you thought you knew the answer.”
A number of students distinguished between the confusion of not knowing and the confusion of thinking you knew something. I’m going to pay closer attention to which reactions go with which conditions. My gut feeling is that “not knowing” usually results in frustration and requests for answers, but “when you thought you knew” results in anger and accusations of unfairness.
I might further distinguish the “when you thought you knew” confusion into two types: things you thought you knew based on previous experience, and things you thought you knew caused by pseudoteaching by yours truly…
You can see the tension between what students see as “guessing” and what I see as “gathering evidence and figuring it out.” A related problem is students throwing out an entire model or train of thought after finding a single mistake. Obviously, I need to approach this differently. Maybe help students take more control of finding evidence and linking evidence to inferences, so that if they find contradictory evidence, it’s clearer which inferences it affects, or which evidence they need to double-check.
Finally, the responses show a pattern of responding to confusion by seeking an authority to hand over the right answer. That’s not always a bad idea, of course. But there are a lot of other approaches missing from the list. I get the feeling this is going to be one of those long-term headaches that’s going to make me reorganize the inside of my brain.
Is confusion useful?
Confusion as my students see it incorporates a lot of things, some of which are not useful for learning (fatigue, self-flagellation). My definition of confusion barely overlaps with theirs — including things like perplexity, conflict, and curiosity. Gotta sleep on this one before I can figure out what on earth to do about this.
However, asking my students about confusion was extremely useful. It definitely created perplexity for me, which means it’s driving my learning. I feel like I have a permanent reservation at the all-you-can-eat food-for-thought buffet.
My students and I talked a lot about confusion last month. I’m trying to figure out what confusion is, exactly, and how my students and I can respond to it. I’m also seriously considering whether confusion is necessary for learning and, if so, how we can create fruitful confusion. Asking my students “what does confusion mean” just causes confusion. So I’ve experimented with ways of posing the question. In my last post, I wrote about the answers I got when I asked them to finish the sentence “This morning I was confused when…”. There were two follow-up prompts. Here are some samples of student responses.
B. I was confused so I…
“asked a classmate”
“checked the book”
“asked the teacher”
“don’t remember what I did.”
C. That helped/didn’t help because…
“we talked and agreed on which one it was.”
“I realized that it was the same type of mount, just talking about a different part on it.”
As you can see, Part C was the toughest to answer. Some students evaluated the process they used (talking to a classmate), some evaluated the results they got (distinguished between two things). But most of them left it blank.
Does the lack of an answer mean “I understand your question, but I don’t know how to answer,” or “I understand your question, but I don’t think it’s important enough to answer,” or “I don’t get what this question means”?
At the end of the day, I wrapped up by showing my students the Veritasium video about the effectiveness of science videos. I was hoping they would relate their experiences of confusion to the confusion described in the video, so I asked them to give some thought, as they watched, to how confusion affects their learning. Some of the answers they wrote are here.
“I get frustrated and mad, so I say screw it, but when I come back I think about what confused me”
“…slows down my learning at first, but once I understand what I was confused about I will remember it for a long time.”
“[allows] me to ask questions about things I don’t understand and learn new concepts that I never thought before or it is a different word for what I know the meaning of.”
“…makes me lose my train of thought.”
“… it prompts me to learn more about so that I understand it. So really confusion is quite essential to my learning.”
“I get upset when I get confused about things I ‘thought’ I knew. I have a hard time figuring out what to do then because I question everything I knew about the subject.”
1. The theme of the day was not very cohesive. Most students were not able to relate to the Veritasium video, and seemed unsure why we were watching it. They also did not seem to connect the questions throughout the day. I don’t think the questions were particularly well-posed, but I’m not sure how to improve them.
2. I heard what the teacher wants to hear. There were a lot of “confusion is essential to my learning” type answers. It sounds nice, but no one in the class responds to confusing ideas by saying “oh neat — I’m going to learn something important! Thanks, teach!” The real responses are giving up, demanding to know the answer, or getting angry. Since most people didn’t write about those at all, I either didn’t set this up in a way that helped people think past the clichés, or didn’t help them trust me with their real thoughts.
3. Some students may label as “confusing” any new idea that frustrates them. I was assuming that confusion implied ambiguity, which is part of why “I’m confuuuuused!” pushes my buttons so badly. Being accused of imprecise communication is, well, fightin’ words. (Yes, I know I have issues.) But many students use the word “confusion” in a much broader way than I do.
4. There are some clues here about who needs which confusion fix-up techniques. If you are losing your train of thought, you may need techniques for “holding” your thinking. If you are getting upset, you may need techniques for switching tasks, taking a different approach, getting up and walking around, etc.
Next September we’ll take some time in the first week to gather some experimental data on confusion-wrangling. Maybe just pick a few confusing ideas that we’re not ready to figure out yet, practise responding to them in different ways, and record the effects. I bet we could shoot some great video…