
I’ve just agreed to be the head judge for a LEGO robot competition for high school students.  In light of my workload this year, that probably means I have lost my marbles.  However, I couldn’t resist.  I judged last year and found it extremely interesting.  I’m looking forward to meeting others in the province who love the combination of kids and robots, working with the judges to develop a consistent way to score teams’ performance, and just getting off campus more.  Of course, if I ended up recruiting kids into my program, that wouldn’t be so bad either.

Acadia University hosts the local FIRST LEGO League competition for 9-14 year olds, which is co-ordinated internationally.  Four years ago, they decided to run an independent high-school competition so that kids who had aged out of FIRST could continue to compete.  To see the details, go to the competition page and click on High School Robotics.

My responsibilities are

  • defining the challenges (this needs to happen ASAP)
  • getting the word out about the competition, which is in February
  • answering questions from school teams about the competition and the challenges
  • helping with orientation for the judging team

The teams borrow or buy a robot kit and get three challenges to complete — things like moving pop cans, dropping balls into containers, detecting and navigating around obstacles, etc.  The teams get two runs through the course, with time in between the runs to make changes to their robots.

How Teams Are Evaluated

  1. An interview with two judges before their robot runs the course.  They have to explain their code, demonstrate their robot, and answer questions about their process.
  2. An interview between the two runs.  They have to explain what went well, what didn’t go well, and how they are going to improve.

Things I Noticed Last Year

  1. The teams tended to be well balanced — either the students were all able to explain each aspect of the robot, or each student was able to explain one aspect in detail.  There was the occasional student who didn’t seem to be as involved, but not many.
  2. The coaches varied widely in their degree of involvement.  There were some programs that I was pretty sure the teams wouldn’t have come up with on their own, but they seemed able to explain the logic.
  3. Almost all the robots performed poorly on the competition field, with many of the planned features not working.  This surprised me, since organizers publish the exact dimensions and features of the competition field months in advance.  Surely if the design was not meeting the requirements, the students knew that in advance…
  4. Some teams were able to articulate what exactly was not working after their first run (for example, the robot ran into the wall and then couldn’t turn around), and some teams were not.
  5. Regardless of their ability to diagnose the problem, most teams were not able to troubleshoot in a logical way.  The changes they proposed to improve for their second run often addressed an unrelated component — for example, if their robot had incorrectly identified the difference between white and black cans, they might propose to change the motor speed.

For those of you who’ve participated in robotics or similar competitions, any suggestions?  I’m especially interested in these questions:

  • What helps new teams get involved?
  • What features of the challenges can help kids think independently and algorithmically?
  • What practices in the design or judging can promote more causal thinking?

A technician’s career depends on their troubleshooting skills, but we don’t teach them.  Instead, we teach students to build and analyze circuits.  We know that they will have “troubles” along the way, and we hope that they will learn to “shoot” them.  Or worse — we assume that troubleshooting is a “knack” bestowed by “fortune,” and that our function is to weed out students who don’t have it.

Problem-solving fortune cookie, by Tomasz Stasiuk, via Flickr

Bull.

Some students enter with those skills, some don’t, but all of them understandably interpret “building circuits” as the point.  That’s what we teach, that’s what we assess, right?  This causes weird tensions throughout the program.  Students rarely attend to or improve troubleshooting skills deliberately.  This is the story of how I’m starting to teach that this year.

Last spring, I started feeling frustrated by an underlying pattern in my classroom that I couldn’t put my finger on.  Eventually, I decided that entailment was part of it.  My students were only sometimes clear about which was the cause and which was the effect, often begged the question, missed the meaning of stipulative definitions, and made unsupported assumptions (especially based on text, but sometimes based on the spoken word).  We had no shared vocabulary to talk about what it means for a conclusion to “follow” from a set of premises.  My students obviously troubleshoot in their daily lives, whether it’s “where are my keys” or “why is the baby crying.”  When the car won’t start, they don’t check the tire pressure.  Yet when their amp has no bias, they might check the signal source before checking the power supply (this makes no sense).

I was only occasionally successful at tapping into their everyday logic.  In a program that ostensibly revolves around troubleshooting, this is a serious problem.  I started discussing troubleshooting explicitly in class, modeling techniques like half-split and keeping an engineering log, and informally assessing my students on their use.  It wasn’t really helping them think through the ideas — only memorize some useful techniques.  I started wondering whether I should teach symbolic logic or Euclidean proofs somehow. I read about mathematical and scientific habits of mind, but there seemed to be an arbitrarily large number of possible candidates and no clear pattern language to help a newcomer decide which one to use when.
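In case “half-split” is new to you: it’s just binary search over the signal chain.  Probe the midpoint, decide whether the fault is upstream or downstream of that point, and repeat.  Here’s a minimal sketch; the stage names and the probe function are invented for illustration (in the shop, “probing” means putting a meter or scope on that point):

```python
# A sketch of the half-split idea over a hypothetical signal chain.
# Everything here is invented for illustration.

def half_split(stages, signal_ok_at):
    """Return the first faulty stage in an ordered chain.

    stages: stage names in signal-flow order.
    signal_ok_at(i): True if the signal is still good at the output of stage i.
    Assumes a good input to stage 0 and exactly one faulty stage.
    """
    lo, hi = 0, len(stages) - 1          # fault is somewhere in stages[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if signal_ok_at(mid):            # good here -> fault is downstream
            lo = mid + 1
        else:                            # bad here -> fault is here or upstream
            hi = mid
    return stages[lo]

# Made-up example: an 8-stage chain with a fault in the "tone stack"
chain = ["input jack", "preamp", "tone stack", "mixer", "driver",
         "power amp", "output transformer", "speaker jack"]
fault = chain.index("tone stack")
print(half_split(chain, lambda i: i < fault))   # -> "tone stack"
```

The payoff is the number of measurements: three probes instead of up to eight, and the gap only grows as the chain gets longer.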

I started teaching technical reading, and boiled down the reading tactics to

  • Choosing a purpose
  • Finding the confusion
  • Checking for mental pictures/descriptions
  • Using structural clues
  • Making connections to what you already know
  • Asking questions/making inferences

That helped.  We started to have a way to talk about the difference between what an author means and what I think.  Because of that, I discovered that my students had no idea what an inference was.

I started reading everything I could about logic and critical thinking.  It led me to a lot of sentimental claptrap.  There’s a whole unfortunate genre of books about “harnessing the power of the whole brain” and “thinking outside the box” and other contentless corporate-pep-talk cheerleading.  On the rare occasions that these materials contained teaching ideas, they ignored the “critical” part of critical thinking altogether and seemed satisfied by the “creativity” of students thinking anything at all, concluding that we should “celebrate our students’ (employees’) ideas.”

Yeah.  I get that already.

One of the things I read was the website for the Foundation for Critical Thinking (FCT), and I confess that I didn’t have much hope.  It looked a lot like all the others.  I started reading about their taxonomy of thinking and found it simplistic.  I let it sit in the back of my brain for the summer.  But it kept coming back.  The more I read it, the more useful it seemed.  It helped me notice and connect other threads of “thinking moves” that I felt were missing in my classes:

  • What premises are presupposed by a conclusion?
  • What other conclusions follow from those premises?
  • Are there other sets of premises from which this conclusion would also follow?
  • What is the difference between a characteristic and a definition?  Between a necessary and sufficient condition?
  • Generalize based on specific examples
  • Give a specific example based on a generalization
  • Try to resolve discrepancies
  • Identify the steps that lead from premise to conclusion

So I read more.  Their basic model of critical thinking has 8 elements (Purpose, Questions, Information, Inferences, etc.) and 9 standards against which the elements should be assessed (Clear? Accurate? Logical? Significant? etc.).  As you can see, there’s a fair amount of overlap with the reading comprehension tactics.  The FCT also discusses 7 intellectual traits or attitudes that they consider helpful: intellectual humility, perseverance, autonomy, empathy, integrity, etc.

Then I read an essay called Why Students and Teachers Don’t Reason Well. The authors discuss responses and perspectives of teachers who have taken FCT workshops — see the section called “The Many Ways Teachers Mis-Assess Reasoning.”  It shed a lot of light on the above-mentioned sentimental claptrap.

Finally, here is their paper on faculty emphasis on critical thinking in education schools across the US. Interview responses include samples of both weak and strong characteristic profiles. I found it fascinating.  Most ed school profs who were interviewed knew that they wanted to emphasize critical thinking in their classes, but couldn’t come up with a clear definition of it… very much the position I was in.

I’m a little wary of putting too much weight on this one model, but it has been very helpful in clarifying what I’m looking for in my students’ thinking (and in my own).  I’m not convinced that my definition of critical thinking is exhaustive, but at least I have one now (this is better than the vague feeling of frustration and unease I had before).  The expected benefit is better conversations about troubleshooting — inferences about causes, implications of various solutions, the ability to generate better-quality questions, etc. Some unexpected benefits include:

What I say to students — it helps me use specific and consistent language when I write to them. I’m focusing on clarity, precision, and relevance in their questions, and clarity and logic in their inferences.  Also, an agreed-upon language about high-quality thinking means that I’m training myself to stop writing “This is impressive reasoning” on their papers.  Who cares that I’m impressed?  Was the purpose to impress me?  Or was the purpose to reason well?  I’m learning to write “Using a diagram helped clarify the logic of this inference,” and let them decide whether they’re impressed with themselves.  It’s not perfect (I’m still doing a lot of the judging) but I think it’s an improvement.  As I mentioned, any agreed-upon taxonomy would work.  I just haven’t found any others that don’t irritate me.

What they say to themselves — my students are already starting to expect that I will write “can you clarify what you mean by x exactly?” Language to that effect is starting to show up in the feedback they write on self-assessments.

What they say to each other — I’ve started using real-time feedback on group discussions (more soon), and realized that I’m looking for all the same things there (“When you say x, do you mean…” and “If x, does that mean y?”).

What I hear — I’m learning to hear out their current conceptions, regardless of accuracy.  Giving them feedback on the quality of their thought takes my focus away from “rightness” (for now anyway).  It also helps me appreciate how exquisitely logical their thinking sometimes is, complete with self-aware inferences that clearly proceed from premise to conclusion.  I’m embarrassed to say that my knee-jerk desire to “protect” them from their mistakes meant that I often insisted that they mouth true statements for which they had no logical justification — in other words, I beat the logic out of them.  Then I complained about it.

What they say to me — students have started asking questions in class that sound like “can you clarify what you mean by x” and “when you say x, does that mean y?”  Holy smoke.  From a group that’s only been in college for 10 days?  I had to hear it to believe it.

I finished my first attempt at a skills list.

Status:

It’s way too long.  There are 48 skills on it.  Unfortunately, the curriculum actually requires that all of them be crammed into the semester.  It’s a 15-week course, 60 hours.  Not quite one skill per hour.  It’s also light on troubleshooting.  I’m trying to build in the skills that troubleshooting requires.  They will inevitably troubleshoot.  I’ll help them.  I’ll give them pointers.  I’ll host discussions and strategy sessions in class.  But I might not grade them on it.  Assessing troubleshooting (more than the once-over included here) might have to go in the Semiconductor Circuits course.

Initial plan for final grade:

100: sure, if you have 5’s on everything!

80-99: no score below 4 (average the scores)

60-79: no score below 3 (average the scores, cap at 79)

Yes, this means that I’m willing to fail someone if they don’t get the point about even one of these.  I hope I’m making the right decision here.  Shortening the list of skills will probably help.
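If I sketch that rule out in code (assuming per-skill scores on a 1-5 scale, and assuming “average the scores” means scaling the average up to a percentage, which is a detail I haven’t actually committed to), it looks something like this:

```python
# Tentative sketch of the grading bands above.  Per-skill scores are 1-5;
# scaling the average by 20 to get a percentage is an assumption, not a decision.

def final_grade(scores):
    avg20 = round(sum(scores) / len(scores) * 20)   # average, scaled to /100
    if min(scores) == 5:
        return 100                     # 5's on everything
    if min(scores) >= 4:
        return min(avg20, 99)          # no score below 4: 80-99
    if min(scores) >= 3:
        return min(avg20, 79)          # no score below 3: 60-79, capped
    return None                        # anything below 3: failing; exact number TBD

print(final_grade([5, 5, 5, 5]))       # 100
print(final_grade([4, 5, 4, 5]))       # 90
print(final_grade([3, 5, 5, 5]))       # 79 (capped by the band)
```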

Tests:

You must show some evidence of preparation before you assess.  I don’t care what it is but I want proof of some kind (data you experimented with, practise problems you completed, etc.).  This is about helping students take control of the cause-and-effect between their work and their score (as opposed to my work and their score).  It also should give us some data about how effective study strategy X is for student Y.

Since I’ll have to tell the class that a quiz is coming up (probably a week in advance) so that they can prepare some evidence that they are ready, there will be no pop quizzes.  I’m ok with that.

Shop Time:

A lot of the skills have to be demonstrated in the shop.  Ideal scenario: I pick 2-3 skills to assess.  At the beginning of our 3-hour shop period (or, if I’m really organized, a few days before), I announce which skills I will be assessing.  They have 3 hours to practise, and can let me know when they’re ready to demonstrate.  The lab book will be a good source of circuits to practise on, but this system means I will no longer require them to complete the lab exercise as it is written.  If they want to branch out and create their own experiment, I figure that’s a win.  If they do half the lab and their skills are up to scratch, why force them to do the rest?  If they need the rest for some other skill, they can always come back.  Or get started on it in the remaining time.  I suspect that they will try to make up their own experiments, realize it’s harder than it looks, and go back to following the lab book.  That might be ok, since they’re doing it with a clear target in mind (today I need to prove that I can use two scope probes at the same time).  At the same time, the students who are bored can have that extra challenge, and maybe score a 5 in the process.

In the past, many students have stumbled through the labs like zombies, skipping the explanations, the purpose, all that other direction-finding stuff.  They get to the end and have no idea what the point was.  Then they complain that the lab book is badly written.  *laugh*  And, well, it is.  But if a clear skill-target can wake them up and get them doing this work with purpose, it’ll be an improvement.  Especially since the circuits are so trivial, it’s often hard for the students to see the point. “Why bother hooking up two resistors in series?” Or, my absolute favourite — the vocational equivalent of “when in my life will I ever need this” is “when in industry will I ever need to hook up two resistors in series??”  Ah, but the point of the lab wasn’t the resistors.  It was the multimeter you used, and the process of testing your predictions.  And yes, you will need those in industry…
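For the record, here’s the kind of prediction I mean for the lowly two-resistors-in-series lab (the component values are invented for illustration):

```python
# Hypothetical series circuit: predict the current and the voltage across each
# resistor, then check the predictions with a multimeter.  Values are made up.
V_supply = 9.0                   # volts
R1, R2 = 1_000.0, 2_200.0        # ohms

I = V_supply / (R1 + R2)         # same current through both resistors
V_R1 = I * R1
V_R2 = I * R2

print(f"Predicted: I = {I*1000:.2f} mA, V_R1 = {V_R1:.2f} V, V_R2 = {V_R2:.2f} V")
# If the meter disagrees by more than the resistor tolerance, either the circuit
# or the prediction is wrong -- and figuring out which one is the actual skill.
```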

Things that will get confusing: if they are practising their measurement skills and need my help, I am basically tutoring them.  I would prefer not to tutor and assess the same skill in a single day, but I don’t want to discourage either one, so I guess I have to suck it up.  I don’t want to make them afraid to admit they don’t understand something.  Saying they have to assess another day if they got any help from me would just discourage them from asking for help when they need it.  Last year I had them do the lab, then evaluated them by asking them what the point was.  That worked out ok, gave what seemed like meaningful data, so I guess this is no worse.  And that part of the course is much less of a problem than the abstract skills and the metacognition.

One addition: I will try to make my help a little more “expensive” by requiring that they document their troubleshooting before I help them.  DOCUMENT.  YOUR.  TROUBLESHOOTING.  Amazing how troublesome those three words can be!  No, I do not mean “just tell me what you did, no need to write it down”.  No, I do not mean what your buddy at the next bench did when he looked at it.  No, I do not mean your hunch that the transistor is blown, so you chucked it and put a new one in, without measuring anything or testing the transistor.  Argh.  Maybe start with requiring that they have one documented troubleshooting attempt, then gradually increase the number throughout the semester.
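For what it’s worth, here’s roughly what I’d accept as a single documented attempt (the details below are invented, not a real student’s log):

  • Symptom: no output at the speaker, even with a known-good signal at the input jack.
  • Hypothesis: the supply voltage isn’t reaching the output stage.
  • Measurement: 0.3 V on the output stage’s supply rail, where I expected about 12 V.
  • Conclusion: the fault is in the supply path, not the output transistor, so the next check is the supply connections.

One hypothesis, one measurement, one conclusion.  That’s it.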

Without further ado, the skills list is in the next post.
