Judging at a LEGO Robotics Competition

I’ve just agreed to be the head judge for a LEGO robot competition for high school students.  In light of my workload this year, that probably means I have lost my marbles.  However, I couldn’t resist.  I judged last year and found it extremely interesting.  I’m looking forward to meeting others in the province who love the combination of kids and robots, working with the judges to develop a consistent way to score teams’ performance, and just getting off campus more.  Of course, if I ended up recruiting kids into my program, that wouldn’t be so bad either.

Acadia University hosts the local FIRST LEGO League competition for 9- to 14-year-olds, which is co-ordinated internationally.  Four years ago, they decided to run an independent high-school competition so that kids who had aged out of FIRST could continue to compete.  To see the details, go to the competition page and click on High School Robotics.

My responsibilities are:

  • defining the challenges (this needs to happen ASAP)
  • getting the word out about the competition, which is in February
  • answering questions from school teams about the competition and the challenges
  • helping with orientation for the judging team

The teams borrow or buy a robot kit and get three challenges to complete — things like moving pop cans, dropping balls into containers, and detecting and navigating around obstacles.  The teams get two runs through the course, with time in between the runs to make changes to their robots.

How Teams Are Evaluated

  1. An interview with two judges before their robot runs the course.  They have to explain their code, demonstrate their robot, and answer questions about their process.
  2. An interview between the two runs.  They have to explain what went well, what didn’t go well, and how they are going to improve.

Things I Noticed Last Year

  1. The teams tended to be well balanced — either the students were all able to explain each aspect of the robot, or each student was able to explain one aspect in detail.  There was the occasional student who didn’t seem to be as involved, but not many.
  2. The coaches varied widely in their degree of involvement.  There were some programs that I was pretty sure the teams wouldn’t have come up with on their own, but the students seemed able to explain the logic.
  3. Almost all the robots performed poorly on the competition field, with many of the planned features not working.  This surprised me, since the organizers publish the exact dimensions and features of the competition field months in advance.  Surely if a design wasn’t meeting the requirements, the students knew that in advance…
  4. Some teams were able to articulate what exactly was not working after their first run (for example, the robot ran into the wall and then couldn’t turn around), and some teams were not.
  5. Regardless of their ability to diagnose the problem, most teams were not able to troubleshoot in a logical way.  The changes they proposed for their second run often addressed an unrelated component — for example, if their robot had incorrectly identified the difference between white and black cans, they might propose to change the motor speed (see the sketch after this list).

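To make that last point concrete, here is a minimal Python sketch of the kind of classification logic involved, since Python is a common choice for programming LEGO robots.  Everything in it is assumed rather than taken from any team’s code: read_light_sensor() is a hypothetical stand-in for a real sensor call, and the 0-100 reflected-light scale and the threshold are illustrative.

    # Hypothetical sketch: classifying a can as white or black from a
    # reflected-light reading (assumed 0-100 scale, higher = brighter).
    LIGHT_THRESHOLD = 50  # readings above this are treated as "white"

    def read_light_sensor():
        # Stand-in for a real sensor call on the robot.
        return 72

    def classify_can(reading):
        # The classification depends only on the sensor reading and the
        # threshold; motor speed never enters into it.
        return "white" if reading > LIGHT_THRESHOLD else "black"

    print(classify_can(read_light_sensor()))  # -> "white"

If cans are being mislabelled, the fix has to land somewhere in this logic (the threshold, the sensor’s placement, the ambient light); no change to motor speed can touch it.  That is the causal disconnect I kept seeing.
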
For those of you who’ve participated in robotics or similar competitions, any suggestions?  I’m especially interested in these questions:

  • What helps new teams get involved?
  • What features of the challenges can help kids think independently and algorithmically?
  • What practices in challenge design or judging can promote more causal thinking?
