
I’ve just agreed to be the head judge for a LEGO robot competition for high school students.  In light of my workload this year, that probably means I have lost my marbles.  However, I couldn’t resist.  I judged last year and found it extremely interesting.  I’m looking forward to meeting others in the province who love the combination of kids and robots, working with the judges to develop a consistent way to score teams’ performance, and just getting off campus more.  (Of course, if I ended up recruiting kids into my program, that wouldn’t be so bad either.)

Acadia University hosts the local FIRST LEGO League competition for 9-14 year olds, which is co-ordinated internationally.  Four years ago, they decided to run an independent high-school competition so that kids who had aged out of FIRST could continue to compete.  To see the details, go to the competition page and click on High School Robotics.

My responsibilities are:

  • defining the challenges (this needs to happen ASAP)
  • getting the word out about the competition, which is in February
  • answering questions from school teams about the competition and the challenges
  • helping with orientation for the judging team

The teams borrow or buy a robot kit and get three challenges to complete — things like moving pop cans, dropping balls into containers, detecting and navigating around obstacles, etc.  The teams get two runs through the course, with time in between the runs to make changes to their robots.

How Teams Are Evaluated

  1. An interview with two judges before their robot runs the course.  They have to explain their code, demonstrate their robot, and answer questions about their process.
  2. An interview between the two runs.  They have to explain what went well, what didn’t go well, and how they are going to improve.

Things I Noticed Last Year

  1. The teams tended to be well balanced — either the students were all able to explain each aspect of the robot, or each student was able to explain one aspect in detail.  There was the occasional student who didn’t seem to be as involved, but not many.
  2. The coaches varied widely in their degree of involvement.  There were some programs that I was pretty sure the teams wouldn’t have come up with on their own, but they seemed able to explain the logic.
  3. Almost all the robots performed poorly on the competition field, with many of the planned features not working.  This surprised me, since organizers publish the exact dimensions and features of the competition field months in advance.  Surely if the design was not meeting the requirements, the students knew that in advance…
  4. Some teams were able to articulate what exactly was not working after their first run (for example, the robot ran into the wall and then couldn’t turn around), and some teams were not.
  5. Regardless of their ability to diagnose the problem, most teams were not able to troubleshoot in a logical way.  The changes they proposed to improve for their second run often addressed an unrelated component — for example, if their robot had incorrectly identified the difference between white and black cans, they might propose to change the motor speed.

For those of you who’ve participated in robotics or similar competitions, any suggestions?  I’m especially interested in these questions:

  • What helps new teams get involved?
  • What features of the challenges can help kids think independently and algorithmically?
  • What practices in the design or judging can promote more causal thinking?

Ok, I’m ready to play.

When you watch this, what do you wonder?

The game field of infinite moves

Frank Noschese just posed some questions about “just trying something” in problem-solving, and why students seem to do it intuitively with video games but experience “problem-solving paralysis” in physics.  When I started writing my second long-ish comment I realized I’m preoccupied with this, and decided to post it here.

What if part of the difference is students’ reliance on brute force approaches?

In a game, which is a human-designed environment, there are a finite number of possible moves.  And if you think of typical gameplay mechanics, that number is often 3-4.  Run left, run right, jump.  Run right, jump, shoot.   Even if there are 10, they’re finite and predictable: if you run from here and jump from exactly this point, you will always end up at exactly that point.  They’re also largely repetitive from game to game.  No matter how weird the situation in which you find yourself, you know the solution is some permutation of run, jump, shoot.  If you keep trying you will eventually exhaust all the approaches.  It is possible to explore every point on the game field and try every move at every point — the brute force approach (whether this is necessary or even desirable is immaterial to my point).
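
To put a number on “finite,” here’s a toy sketch (mine, not any real game’s logic) of just how small that space is:

```python
# Toy illustration: when the move set is tiny, "try everything" is feasible.
from itertools import product

MOVES = ["run_left", "run_right", "jump", "shoot"]

# Every possible 5-move sequence: 4**5 = 1024 of them.
# A patient kid (or a for-loop) can exhaust the whole space.
sequences = list(product(MOVES, repeat=5))
print(len(sequences))  # 1024
```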

In nature, which is not a human-designed environment, there is an arbitrarily large number of possible moves.  If students surmise that “just trying things until something works” could take years and still might not exhaust all the approaches, well, they’re right.  In fact, this is an insight into science that we probably don’t give them enough credit for.

Now, realistically, they also know that their teacher is not demanding something impossible.  But being asked to choose from among infinite options, and not knowing how long you’re going to be expected to keep doing that, must make you feel pretty powerless.  I suspect that some students experience a physics experiment as an infinite playing field with infinite moves, of which every point must be explored.  Concluding that that’s pointless or impossible is, frankly, valid.  The problem here isn’t that they’re not applying their game-playing strategies to science; the problem is that they are. Other conclusions that would follow:

  • If there are infinite equally likely options, then whether you “win” depends on luck.  There is no point trying to get better at this since it is uncontrollable.
  • People who regularly win at an uncontrollable game must have some kind of magic power (“smartness”) that is not available to others.

And yet, those of us on the other side of the lesson plan do walk into those kinds of situations.  We find them fun and challenging.   When I think about why I do, it’s because I’m sure of two things:

  • any failure at all will generate more information than I have
  • any new information will allow me to make better quality inferences about what to do next

I don’t experience the game space as an infinite playing field of which each point must be explored.  I experience it as an infinite playing field where it’s (almost) always possible to play “warmer-colder.”  I mine my failures for information about whether I’m getting closer to or farther away from the solution.  I’m comfortable with the idea that I will spend my time getting less wrong.  Since all failures contain this information, the process of attempting an experiment generally allows me to constrain the search space down to a manageable size.
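
If I had to write that habit down as code, it might look like this guessing-game sketch (purely illustrative): every failed guess still constrains where the answer can be.

```python
# "Warmer-colder" as code: every failed guess still constrains the search.
def warmer_colder(check, lo=0, hi=1000):
    """check(guess) returns 'found', 'higher', or 'lower'."""
    while lo <= hi:
        guess = (lo + hi) // 2
        result = check(guess)
        if result == "found":
            return guess
        if result == "higher":   # wrong, but it rules out everything below
            lo = guess + 1
        else:                    # wrong, but it rules out everything above
            hi = guess - 1
    return None

secret = 742
hint = lambda g: "found" if g == secret else ("higher" if g < secret else "lower")
print(warmer_colder(hint))  # 742, reached in ~10 guesses instead of up to 1000
```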

My willingness to engage with these types of problems depends on a skill (extracting constraint info from failures), a belief (it is almost always possible to do this), and an attitude (“less wrong” is an honourable process that is worth being proud of, not an indictment of my intelligence) that I think my students don’t have.

Richard Louv makes a related point in Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder (my review and some quotes here).  He suggests that there are specific advantages to unstructured outdoor play that are not available otherwise — distinct from the advantages that are available from design-y play structures or in highly-interpreted walks on groomed trails.  Unstructured play brings us face to face with infinite possibility.  Maybe it builds some comfort and helps us develop mental and emotional strategies for not being immobilized by it?

I’m not sure how to check, and if I could, I’m not sure I’d know what to do about it.  I guess I’ll just try something, figure out a way to tell if it made things better or worse, then use that information to improve…

Can my students use their skills in real-world situations?  Heck, can they use their skills in combination with any single other skill in the curriculum?  When I was redesigning my grading system, I needed a way to find out.  It’s embedded in the “levels” of skills that I use, so I’ll explain those first.

What are these “levels” you keep talking about?

For every curriculum unit, students get a “skill sheet” listing both theory and shop skills.  Here’s an example of the “theory” side of a unit of my Electric Machines course.  (For a complete skills sheet, showing how theory skills correspond to shop skills, and the full story of how I use them, see How I Grade).  If I were starting this unit over, I would improve the descriptions of each skill (“understand X, Y, and Z” isn’t very clear to the students) and make the formats consistent (the first four are noun phrases, the last one is a complete sentence; things like that annoy me).  But this should give enough info to illustrate.

AC Motor Skills

So, about synthesis…

Realistically, all skills involve synthesis.  The levels indicate complexity of synthesis, not whether synthesis is involved at all.  My goal is to disaggregate skills only as far as I need to figure out what they need to improve — and no further.

L2 Example

For example, in the unit shown above, wound-rotor induction motors are at level-2.  That’s because they’re functionally almost identical to squirrel-cage motors, which we studied in the previous unit, and the underlying concepts help students understand the rest of the unit.

Quiz question: List one advantage and one disadvantage of wound-rotor induction motors compared to squirrel-cage motors.

Danger: a student could get this wrong if they don’t understand wound-rotor or squirrel-cage motors.  But the question is simple enough that it’s pretty clear which one is the problem.  Also, I have a record of the previous unit on squirrel-cage motors; both the student and I can look back at that to find out if their problem is there.

L3 Example

Synchronous, split-phase, and universal motors require a solid understanding of power factor, reflected load, and various ideas about magnetism (which the students haven’t seen since last year, and never in this context), so that puts them at level-3.

Quiz question: Synchronous motors can be used to correct power factor.  Explain in 1-2 sentences how this is possible.
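
(One possible answer, with numbers invented for illustration: run the synchronous motor overexcited so it draws leading current; the leading kvar it supplies cancels some of the lagging kvar drawn by the induction motors on the same feeder.)

$$
Q = P\tan(\cos^{-1}0.8) = 80\ \text{kW}\times 0.75 = 60\ \text{kvar lagging};\qquad
Q' = 60 - 35 = 25\ \text{kvar} \;\Rightarrow\; \text{PF} = \frac{80}{\sqrt{80^2+25^2}} \approx 0.95
$$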

L4 Example

The level-4 skill in this unit is to evaluate a type of motor for a given application.

Quiz questions: “Recommend a motor for [scenario].  Explain why.”  Or “You need to replace a 3-phase AC motor.  Give 3 questions you should ask to help you select the best type.  Explain why.”

Why this is an improvement over last year

Last year I would have put only the level-4 problem on a test.  The solutions were either excellent or incoherent.  I couldn’t help people get better, and they couldn’t help themselves.

Level 5 Questions

You’ll notice that there are no level 5 skills on the skill sheet, even though the unit is graded out of 5.  Level 5 is what others might call “Mastery,” while Level 4 might be called “Proficiency.”  I teach up to Level 4, and that’s an 80%.  A Level 5 question is the name I give to questions that are not exercises but actually problems for most of the class.  There are a number of ways to get a 5/5.  All of them include both synthesis and a context that was not directly taught in class.  So the main difference between L4 and L5 isn’t synthesis; it’s problem-solving.

I occasionally put level-5 questions on quizzes, but not every quiz.  I might do it to introduce a new unit, or as a way of touching on some material that otherwise we won’t have time for. Other ways to earn a level 5: research a topic I haven’t taught and present it to me, or to the class.  Build something.  Fix something.  I prefer these to quiz questions; they’re better experience.  So I put examples of project topics on the skill sheet.  I also encourage students to propose their own topics.  Whether they use my topics or theirs, they have to decide what exactly the question is, how they will find the answer, and how they will demonstrate their skill.  We’ve had a ton of fun with this. I’ve sometimes put questions on quizzes that, if no one solved them, could be taken into the shop and worked on at the students’ leisure.

I wrote lots in this post about level-5 questions that are independent projects, not quiz questions.  But I didn’t give any examples of level-5 questions that are on quizzes, so here are a few.

Reduced Voltage Manual Starter

This is a reduced-voltage manual starter on a DC shunt motor.  If I gave this question now, it would be trivial, because we’ve done a whole unit on starters.  But it was on the second quiz of the semester, when the students had barely wrapped their heads around DC motors.  It’s a conceptually tough question because the style of drafting is unfamiliar to my students, there’s an electromagnet sealing in the switch that doesn’t make sense unless you’re thinking ahead to safety hazards caused by power failures, and we hadn’t discussed the idea that there was even such a thing as a reduced-voltage starter.  But we had discussed the problem of high current draw on startup, and the loading effect that it causes, and the dangers of sudden startups of machinery that wasn’t properly de-energized.  Those are the problems that this device is intended to solve.  One student got it.
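
For anyone rusty on why reduced voltage matters here, the back-of-envelope version (values invented for illustration): at standstill a DC motor has no back-EMF, so only the armature resistance limits the current.

$$
I_{\text{start}} = \frac{V}{R_a} = \frac{120\ \text{V}}{0.5\ \Omega} = 240\ \text{A},
\qquad
I_{\text{running}} = \frac{V - E_b}{R_a} = \frac{120\ \text{V} - 115\ \text{V}}{0.5\ \Omega} = 10\ \text{A}
$$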

Here’s one that no one solved, but someone built later in the shop.

Draw a circuit, with a square-wave power supply, where the capacitor charges up almost instantly and discharges over the course of 17 ms.

You may use any kind of component, but no human intervention is allowed (i.e., you can’t push a button or pull out a component or otherwise interfere with the circuit).  You do not need to use standard component values.

This requires time-constant switching, which means combining a diode and a capacitor.  They had just learned capacitors that week in one course, and diodes the previous week in a second course.  The knowledge was pretty fresh, so they weren’t really ready to use it in a flexible way yet.  But the diode unit was all about time-constant switching, and it’s a hard concept to get used to, so this question got them thinking about it from another angle.
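
One way the numbers could shake out (my values, not necessarily what the student built): the diode gives a near-instant charge path, the capacitor discharges through a resistor once the diode blocks, and “fully discharged” is roughly five time constants.

$$
5\tau \approx 17\ \text{ms} \;\Rightarrow\; \tau = RC \approx 3.4\ \text{ms},
\qquad \text{e.g. } C = 1\ \mu\text{F},\; R = 3.4\ \text{k}\Omega
$$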

Other examples: find total impedance in a parallel circuit, when all we’ve studied so far is series circuits.  If they followed the rules for parallel resistance that we studied last year, it would work out; but they had just learned vectors, many of them for the first time, so most people added the vectors (instead of adding the inverses and inverting).  Or, find total impedance of a resistor-capacitor-inductor circuit, when all we’ve studied is resistors and capacitors.  Amazingly, most of the class got that one.  I was really impressed.  Again, it’s a question where the conclusion follows logically from tools that the students already have; but they might have to hold the tool by the blade and whack the problem with what they think is the handle.
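
For the record, the arithmetic the successful students did looks like this (component values invented; Python’s complex numbers stand in for the phasors):

```python
# Parallel impedance: add the inverses and invert -- don't just add the phasors.
# Component values are invented for illustration.
Z1 = 100 + 0j   # 100-ohm resistor
Z2 = 0 - 250j   # capacitor with 250 ohms of reactance at the test frequency

vector_sum = Z1 + Z2            # the series formula -- wrong for parallel
parallel = 1 / (1/Z1 + 1/Z2)    # correct for parallel

print(f"adding the vectors: {vector_sum:.1f}")
print(f"parallel combination: {parallel:.1f}, magnitude {abs(parallel):.1f} ohms")
```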

In the new grading system, the skills list for each unit ends at 4/5.  Any student who wants a 5/5 must apply their skills to a novel context (not explicitly taught in class), choose their own problem-solving strategy, and combine ideas from at least two units.

I put an L5 question on a quiz at least once per unit as a way to assess problem-solving and synthesis.  The students are handling them quite well.  But the questions have also had a host of unexpectedly positive benefits for the class. Top 10 reasons I love the “L5 question”:

1.  I can put anything on a quiz.  Since L5 questions by definition include synthesis, the students understand that anything is fair game: skills we’ve learned in other units, in co-requisite courses, in pre-requisite courses.  So L5 questions free me from the compartmentalization that the skills-based grading scheme might otherwise enforce.


2.  Students use it to practise “trying something” even though they don’t know the right answer. An L5 question on a quiz feels like a bonus question, so there’s less stigma attached to getting it wrong.  Unlike other levels, your score on L5 questions cannot go down.  So, you can write any wacky thing that goes through your head, and there’s no penalty.  I give 30 minutes for quizzes, and deliberately choose the questions so that even the slower students finish in about 25 minutes.  That means there’s nothing left to do except think about the L5 question.  This helps students practise creating representations, choosing symbols, and thinking about unfamiliar things in a low-stakes environment.  (Who would have thought that a quiz would become a low-stakes environment??)


3.  It’s great for introducing a new unit.  Since every unit builds on the previous one, a student who has mastered the tricky questions from Unit 1 probably has all the skills to do the easy questions from Unit 2, if they can figure out how to apply them.  I throw these on the quiz and one of two things happens: some students get them right, in which case they’re primed to make sense of the new unit; some students get them wrong, in which case I’m introducing Unit 2 at the exact moment when they’re dying of curiosity to know how it works.


4.  It doesn’t have to go on a quiz. An L5 question can be a research project or an invention or a presentation to the class or an interpretive dance or a graphic novel, if it meets the synthesis/problem-solving criteria.

5.  It’s a great response to tangential questions in class (“Interesting, I’m not sure of the answer… How could you find out?  Sounds like a great L5 question.”)


6.  It’s a good way to bring up neat topics that don’t quite fit in the curriculum. I make a list of some of them at the bottom of each skill sheet.  Any student who is curious can learn more about one of those topics.  It’s then up to them to propose both a question and the assessment of its answer.


7.  It’s an instant way to incorporate fix-it projects, service-learning opportunities, and inter-program collaborations that cross my desk every semester.

  • The head chef from the culinary program went to Europe and fried the power supply of his fancy sous-vide cooker, so a student traced the problem, selected and ordered a replacement for the obsolete part, and put it back together.
  • A student in Disability Services needs help building a rehabilitative technology toy for developing fine-motor skills, so a team of four 1st-year students are working together to help him out.
  • The Academic Chair’s Roomba isn’t finding its dock properly anymore.  I ask for volunteers, and voila — Level 5 question.

I don’t need the thing to work at the end; I expect the student to have developed a sensible problem-solving strategy and synthesized their skills.  (Proving to me that it shouldn’t be fixed — for economic or other reasons — might be perfectly legitimate.  It depends on whether you have enough evidence to convince me.)


8.  The students are free to propose a problem. About anything.  As long as it requires them to synthesize and problem-solve.  They can bring in something broken from home and work on it.  They can decide to experiment with something they read about in a trade journal or DIY magazine.

  • The other day a student completed his assigned exercise early (using an inductor to light a 120V lamp from a 12V supply).  So he went out to the parking lot, removed the relay from his trunk latch, wired it into the lamp circuit as a crude boost chopper, and used a signal generator to energize the relay fast enough to make it look like the light was on continuously.  (The idealized arithmetic behind that trick is sketched after this list.)
  • Two students figured out how to test a transistor before I taught the unit — so they asked for permission to destroy one to test their algorithm.  I agreed, on the condition that they teach their methods to the class.
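
The idealized relation behind the relay trick, promised above (this is the textbook continuous-conduction boost-converter formula; the relay version is far cruder, and the student’s actual numbers weren’t recorded):

$$
V_{\text{out}} = \frac{V_{\text{in}}}{1 - D}
\;\Rightarrow\;
D \approx 1 - \frac{12\ \text{V}}{120\ \text{V}} = 0.9
$$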

The assessments don’t have to be involved or time-consuming; they just have to deepen a student’s thinking.  About 3/4 of my students have at least one L5 question.


9.  They are a built-in back-up plan for students who finish their work in class early.


10.  The students get stoked about them.

My first-year group is just starting to explore transistors.  I’ve tried to improve my teaching of “how to read a data sheet.” Instead of reading them the data sheet for each new component, we’ve worked on identifying the common structure of all datasheets, understanding the patterns of organization, symbols, and terminology, and building a list of reputable resources (textbook, manufacturer/distributor websites, Google).  On Friday I introduced the conceptual language of amplifiers, using a worksheet that asks them to plot voltage vs. current and draw conclusions about the patterns.  Once we had explored how a transistor controls current, I asked them to build an LED driver and told them where the schematic was.  Then I walked away for 20 minutes.

2N3904 Transistor

The catch: I hadn’t shown them what a transistor looks like.

I hadn’t given them a copy of the datasheet and walked them through it.  I hadn’t even shown them where the datasheet was.  I went back, a bit anxious, wondering if I would have a mutiny on my hands.

The class was calmly working away, several of them experimenting with working LED drivers and asking themselves what else they could do with them.  They found the model number on the schematic, found the data sheet in the back of the lab book, used the picture to find the component in their component kits (in some cases supplemented by Google) and figured out for themselves how to map the symbol to the physical thing.

I walked around signing off skill sheets, silently relieved.  It’s working.

Also: when expecting students to do something conceptually tricky and self-directed… it helps if it rewards them with something that moves, blinks, or makes noise.
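
For anyone wondering what the sizing works out to, here’s a minimal sketch of the driver arithmetic (all values are my example assumptions, not the actual lab exercise):

```python
# Rough sizing for an NPN LED driver (e.g., a 2N3904 used as a switch).
# Every value here is an assumption for illustration.
V_SUPPLY = 5.0    # supply voltage, volts
V_LED = 2.0       # LED forward drop
V_CESAT = 0.2     # collector-emitter saturation drop
V_BE = 0.7        # base-emitter drop
I_LED = 0.010     # 10 mA target LED current

# Collector resistor sets the LED current.
R_C = (V_SUPPLY - V_LED - V_CESAT) / I_LED   # 280 ohms

# Overdrive the base (forced beta of ~10) so the transistor saturates cleanly.
I_B = I_LED / 10
R_B = (V_SUPPLY - V_BE) / I_B                # 4300 ohms

print(f"R_C = {R_C:.0f} ohms, R_B = {R_B:.0f} ohms")
```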

I think I found a clue about my bimodal distribution of grades.

Two weeks ago I watched a live video feed of the Escape from the Textbook conference.  Paul Zeitz opened his presentation by pointing out that exercises and problems are different.  That’s when the gears in my head started grinding like a seized transmission, and I missed most of the rest of what he said.

He elaborates in his book The Art and Craft of Problem Solving. According to Zeitz, exercises are things you know how to approach.  You might not know the answer right away, but you know what technique will work.  When you are tackling a problem, you don’t know yet which technique to use — your task becomes investigating or maybe inventing techniques.

What I realized, uncomfortably, is that I tend to teach and assign exercises.  Then I test problems.

Result: my tests were not differentiating between students who could use the techniques and students who couldn’t.  They were differentiating between students who had pre-existing problem-solving skills and those who didn’t.

The students would complain that they needed practice beforehand with the same kinds of problems that would be on the test.  I would complain that they hadn’t really understood the techniques, if they were memorizing how to apply them to a specific kind of problem.  All of us were right, sort of.  I had no idea that my students didn’t know how to do this.

As mortifying as it is to realize that I was blaming my students for a mistake I made, I can at least say that we used these situations to talk about learning and problem-solving.  We developed the analogy of “the maze”, which is the twisty, unlit path between you and the solution.  We talked about the difference between the kind of confusion you feel while inside the maze and the kind you feel when standing outside the maze before even having opened the heavy, scary-looking door.  We shared techniques for “stepping inside the maze” — picking a strategy that might be helpful and following it as far as it leads, even if you can’t see the exit.  (You definitely can’t see the exit from outside the door, so what’s to lose?)  We talked about keeping your eyes open, while following your strategy, for clues (you can’t see those from outside the maze either).  This helped.  Students tried to step inside the maze, and we at least had a vocabulary for talking about the uncertainty and fear they felt.

Now I realize it’s not enough to teach them the techniques of exercises.  I must also teach them how to evaluate and maybe even invent techniques — so that they step inside the maze with a plan, rather than aimlessly.  Even if the plan turns out not to work, it’s important to practise choosing one and checking afterward how you could have chosen better.  My lessons no longer stop at “solve for the circuit’s time constant” or “solve for the circuit’s impedance”.  They go on with “what characteristics tell you whether you should use the time constant or impedance” and “are there any circuits where neither one is applicable?  How can you recognize them?  What could you do then?”
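
Here’s that contrast boiled down to a single series RC circuit (generic symbols, nothing specific to our labs): the shape of the source tells you which tool applies.  A step or square-wave input calls for transient analysis; a sinusoidal source in steady state calls for impedance.

$$
\text{Step input: } v_C(t) = V_s\left(1 - e^{-t/RC}\right),\ \tau = RC
\qquad
\text{Sinusoidal steady state: } |Z| = \sqrt{R^2 + X_C^2},\ X_C = \frac{1}{2\pi f C}
$$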
