
On Thursday, two students and I will present a workshop on “Exploring Student-Designed Assessment” at the Pan-Canadian Conference on Universal Design for Learning.  If you’ll be at the conference, please join us!

We hope to take UDL’s “multiple means” to a new level: how much of the assessment strategy can students design themselves?  We’ll explore how Standards-Based Grading can be used to turn over that control, by letting students apply for reassessment when they’re ready, as often as they are ready (up to once per week), and in the format that they choose. For a more detailed exploration of how this connects to UDL philosophy, see my previous post.

Why Should Students Design their Own Assessments?

Tim Bargen, one of the students with whom I’ve co-designed this workshop, values the aspect of UDL that prioritizes offering this flexibility to all students, not just those with a diagnosis or documentation.

“The idea of ‘Tight Goals, Loose Means’ is important because [everyone] has the same freedom. When doing things differently, rather than feeling a need to hide it, it’s easier to discuss the way you are doing things with classmates and share ideas.”

He also values the opportunity to control what he spends more and less time on, and the ability to learn by reassessing skills as often as he chooses. “It allows students with differing prior knowledge to focus on the parts of the content that are difficult for them. It’s simply more efficient to ‘learn the hard way’ [by making mistakes and trying again]. It allows the student to more readily identify the areas they struggle with.”  He also points out that “reassessment” doesn’t just help you fix your mistakes in physics; it can also help you fix your “mistakes” in time management and other life skills.  “‘Mistakes’ could be taken to include those made outside of the classroom, which prevent the student from engaging in or showing up for classes.”

What do we mean by “Student-Designed Assessment,” and how do we do it?

We’ll describe two techniques that combine UDL and Standards-Based Grading, give participants some time to try one of those techniques, and then take questions.  You can follow the structure of the workshop using the handout, linked at the end of this post.

1. Skill Sheets

Description and Example (screencast of handout pp 2-3)

An example of a skill sheet for an AC Circuits course. See handout and screencast for details.

Skill sheets are a tracking tool that students use to figure out what they’ve completed, what they need to work on, and what’s coming up next.

The magic happens when they are used in the context of Standards-Based Grading.  This means that students can choose for themselves when and how to demonstrate their mastery of each skill.

Description (screencast of excerpts from How I Grade)

2. Format-Independent Rubric

Description and Example (Screencast of handout pp 7-11)

If I’m going to encourage students to choose their own format for demonstrating mastery, I have to be ready for anything.  No one’s written a folk song about electrons yet, but I’m looking forward to that day.  In the meantime, I need a rubric that is as format and content independent as possible.

For example, I use a single rubric for anything that students build – whether they submit a report about it, make a video about it, or demonstrate it to me in person.  Whatever the format, the submission must include:

  • Predictions of electrical quantities
  • Measurements to test all predictions
  • Comparisons of predictions to measurements
  • Discussion of what happened, what could be causing it, and how it connects to other things the student has learned

This removes the burden of creating a new rubric for every new thing a student decides to do.  In the workshop, we will present example templates and invite participants to design their own.
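To make “format-independent” concrete, here is a minimal sketch of the rubric treated as a fixed checklist that any submission format gets scored against. (This is my illustration of the idea; the names and structure are not the actual workshop template.)

```python
# Sketch: a format-independent rubric is a fixed checklist; only the
# student's chosen format varies. Wording here is illustrative only.
CRITERIA = (
    "Predictions of electrical quantities",
    "Measurements to test all predictions",
    "Comparisons of predictions to measurements",
    "Discussion of causes and connections to other learning",
)

def assess(evidence):
    """Report which criteria a submission satisfies, regardless of format.
    `evidence` maps a criterion to True if the submission demonstrates it."""
    return {criterion: bool(evidence.get(criterion)) for criterion in CRITERIA}

# The same call scores a written report, a video, or a live demo:
print(assess({"Predictions of electrical quantities": True,
              "Measurements to test all predictions": True}))
```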

We look forward to meeting you and exploring these topics together.  See you soon!

Workshop Handout

Early Warning Signs of Fascism

A local media outlet recently wrote:

“Why the constant, often blatant lying? For one thing, it functioned as a means of fully dominating subordinates, who would have to cast aside all their integrity to repeat outrageous falsehoods and would then be bound to the leader by shame and complicity. ‘The great analysts of truth and language in politics’ — writes McGill University political philosophy professor Jacob T. Levy — including ‘George Orwell, Hannah Arendt, Vaclav Havel — can help us recognize this kind of lie for what it is…. Saying something obviously untrue, and making your subordinates repeat it with a straight face in their own voice, is a particularly startling display of power over them. It’s something that was endemic to totalitarianism.’”

How often does this happen in our classrooms?  How often do we require students to memorize and repeat things they actually think are nonsense?  

  • “Heavy things fall at the same speed as light things.” (Sure, whatever.)
  • “An object in motion will stay in motion forever unless something stops it.” (That’s ridiculous.  Everyone knows that everything stops eventually.  Even planets’ orbits degrade.)
  • “When you burn propane, water comes out.” (Pul-lease.)
  • The answer to “In January of the year 2000, I was one more than eleven times as old as my son William, while in January of 2009, I was seven more than three times as old as him” is somehow not “Why do you not know the age of your own kid?” (The intended algebra is sketched below.)
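For the record, here is the algebra such a problem intends (my reconstruction, not a published solution):

```latex
% Let M be the parent's age and S be William's age in January 2000.
\begin{align*}
  M &= 11S + 1          && \text{(January 2000)} \\
  M + 9 &= 3(S + 9) + 7 && \text{(January 2009)} \\
  11S + 10 &= 3S + 34   && \Rightarrow\ S = 3,\ M = 34
\end{align*}
```

So William was three in January 2000, which any parent could presumably have reported directly.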

Real conversation I had with a class a few years ago:

Me: What do you think so far about how weight affects the speed that things fall?

Students (intoning): “Everything falls at the same speed.”

Me: So, do you think that’s weird?

Students: No.

Me: But, this book… I can feel the heaviness in my hand.  And this pencil, I can barely feel it at all.  It feels like the book is pulling harder downward on my hand than the pencil is.  Why wouldn’t that affect the speed of the fall?

Student: “It’s not actually pulling harder.  It just feels that way, but that’s weight, not mass.”

Me: (weeps quietly)

Please don’t lecture me about the physics.  I’m aware.  Please also don’t lecture me about the terrible fake-Socratic teaching I’m doing in that example dialogue.  I’m aware of that too.  I’m just saying that students often perceive these claims as contradicting their lived experience, and research shows that outside of classrooms, even those who said the right things on the test usually go right back to thinking what they thought before.

And no, I’m not comparing the role of teachers to the role of Presidents or Prime Ministers.  I do realize they’re different.

Should I Conclude Any of These Things?

  1. Students’ ability to fail to retain or synthesize things that don’t make sense to them is actually a healthful and critically needed form of resistance.
  2. When teachers complain about students “just memorizing what they need for the test and forgetting it after, without trying to really digest the material,” what we are complaining about is their fascism-prevention mechanism.
  3. Teachers have the opportunity to be the “warm-up,” the “opening act” — the small-scale practice ground where young minds practice repeating things they don’t believe, thinking they can safely forget them later.
  4. Teachers have the opportunity to be the “inoculation” — the small-scale practice ground where young minds can practice “honoring their dissatisfaction” in a way that, if they get confident with it, might have a chance at saving their integrity, their souls, and their democracy.

Extension Problem

Applying this train of thought to the conventional ways of doing corporate diversity training is left as an exercise for the reader.

 

I wrote last month about new approaches I’m using to find out what students think, keep track of who thinks what, and let the curriculum be guided by student curiosity. When Dan Meyer reblogged it recently, an interesting conversation started in the comments on that site.  The question seems to be, “How is this different from common practice?”  It sparked my thinking, so I thought I’d continue here.  If you’re a new reader, welcome.

Just Formative Assessment?

It may be helpful to know that I’m teaching a community college course on the basics of electricity.  The students come in brimming with questions, assumptions, and ideas about how electricity works in their lives — phone chargers, car batteries, electric fences, solar panels.  And all new knowledge gets judged immediately in that court of everyday life. What I’m trying to do better is to discover students’ pre-existing ideas and questions, especially the ones I wouldn’t have anticipated.

I agree that there is a way in which this is nothing new; in a way, it’s the definition of formative assessment.

Many formative assessments inquire into students’ thinking as a T/F question: did they get it, yes or no?  Others ask the question as if it’s multiple choice: are their ideas about motion Aristotelian, Newtonian, or something else? (See Hestenes’ work leading to the Force Concept Inventory).  Some assessments focus on misconceptions: which of these mistaken ways of thinking are causing their problems?  Typically there is some instruction or exercise or activity, and then we try to find out what they got out of it.  Or maybe it’s a pre-assessment, and we use the information to address and correct misconceptions.

I’m trying to shift to essay questions: not “Do they think correctly?” but “What do they think?”  I’m trying to shift it to a different domain: not “What do they think about how this topic was just taught in this class?” but “What have they ever thought about this topic, in all the parts of their lives, and how can we weave them together?”  I also hope to ask it for a different reason: not just “Which parts of their ideas are correct?” but also “Which parts of their pre-existing ideas are most likely to lead to insight or perplexity?”

As Dan points out, there is a “part 2”: This isn’t just about shifting what I do (keep a spreadsheet where I record student ideas and questions, tagged by topic and activity they were working on when they asked it).  It’s also about shifting my self-assessment.  The best activities aren’t just the ones that help students solve problems; the best assessments yield the most honest student thinking.
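In practice, the spreadsheet amounts to a tagged list that I can filter when planning the next class. Here is a minimal sketch of that structure; the field names and entries are hypothetical, not my actual records:

```python
# Sketch: student ideas recorded with tags for topic and originating activity.
ideas = [
    {"student": "J.", "topic": "static charge", "activity": "flashlight demo",
     "idea": "Do shake-flashlights work like friction charging a glass rod?"},
    {"student": "K.", "topic": "batteries", "activity": "charger lab",
     "idea": "Batteries don't get heavier when charged, so electrons can't weigh much."},
]

def ideas_about(entries, topic):
    """Everything students have said about one topic, across all activities."""
    return [e for e in entries if e["topic"] == topic]

for entry in ideas_about(ideas, "batteries"):
    print(f"{entry['student']} ({entry['activity']}): {entry['idea']}")
```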

Which of the activities in your curriculum would you rank highest on that scale?

What do you think makes them work?

Pros: Student Honesty and Motivation

This year, I’ve got a better handle on who holds which ideas, which ideas are half-digested (applied inconsistently or in self-contradictory ways), and what the students are curious about.

The flashlights you shake to charge — do they work like how friction can transfer electrons from a cat’s fur to a glass rod?

What happens if you try to charge a battery but the volts are lower than the battery you’re trying to charge?

Batteries don’t get heavier when you charge them — that is evidence that electrons don’t weigh much.

For example, if I was looking for a way into the superposition theorem, I couldn’t ask for better than this.

Cons: Fear and Conflict

I’ve written extensively about the fear, anger, conflict, and defensiveness that come to the surface when I encourage students to build a practice of constant re-evaluation, rather than certainty.   What are your suggestions for helping students re-evaluate things when they’re sure they already know them? What are your suggestions for helping students notice when common-sense preconceptions and new ideas aren’t talking to each other?

Bonus points: what are your suggestions for helping teachers re-evaluate things when we’re sure we already know them?  What about for helping teachers notice when our common-sense preconceptions and new ideas aren’t talking to each other?

Why Am I Obsessed With This?

This is the fear that keeps me awake at night:

The students in the first example had learned in class not to discuss certain aspects of their own ideas or models. In particular, they had learned not to talk about “What things are like?” …

The students in my second and third examples had learned that their ideas were worthless (and confusing to think about).

The problem with (some) guided inquiry like this is the illusion of learning. Instructors doing these kinds of “check outs” can convince themselves that students are building powerful scientific models, but really students are just learning not to share any ideas that might be wrong, not to have conversations that they aren’t supposed to have, and to hide interesting questions and insights that are outside the bounds of the “guided curriculum”.

At the end of the day, if students are learning to avoid taking intellectual risks around the instructor, that instructor doesn’t stand a chance of helping those students learn.

(Read the whole thing from Brian Frank)

Which kinds of assessments do you think discourage students from taking intellectual risks around the instructor?  My gut feeling is that anything along the lines of “elicit-confront-resolve” is a major contributor, but I hope that having more data to look at will help me test that hunch.

Pros: I Get Honest and Motivated Too

To be clear, I’m not suggesting that no one else has ever done this.  It’s common to ask students “How were you thinking it through?”, such as when discussing a mistake they made on a test.

I don’t want to just do it, though.  I want to do it better than I did last year.  I want to systematically keep track of student ideas and, together with the students, use those ideas to co-create the curriculum. Even the wrong ideas.  Especially the wrong ideas.  I want them to see what’s good in their well-thought-out, evidence-based “wrong” answers, and see what’s weak about poorly-thought-out, unsubstantiated “right” answers.  I want them to do the same for the ideas of their classmates, especially the ideas they don’t share.

It means that sometimes we go learn about a different topic.  If they’re generating curiosity and insight about parallel circuits, I’m not going to force them to shift to series circuits.  It wastes momentum (not to mention goodwill… or what you might call “engagement” or “motivation”).  They know what the goal of the course is; they’ve paid good money and invested their time in reaching that goal.  We come up with a plan together of what it makes sense to learn about next, so that we move closer to the goal.

Want to help me improve?  Here’s the help I could really use.   If you were one of the people whose first reaction to my original post was “I already know that” — either I already know that to be true, or I already know that to be false… what would have helped you respond with curiosity and perplexity, adding your idea as a valuable one of many?  If that was your response, what made it work?

I’ve been looking for new ways every year to turn over a bit more control to the students, to help them use that control well, and to strike a balance between my responsibility for their safety (in their schoolwork and their future jobs) and my responsibility to their personal and collective self-determination.

One tiny change I made this year is to use more “portfolio-style” assessments.  If you work for the same institution I do, you know that “portfolio” can mean a bewildering variety of things… I’m using it here in the concrete sense used by artists and architects.  So far this semester, that looks like doing in-class exercises where students work on 3-5 examples of the same thing. For example, our first lab about circuits required students to hook up 3 circuits, using batteries, light bulbs, and switches, and draw what they had built.  On the second lab day, I asked them to build the same circuits again, based on their sketches, and add measurements of voltage, current, and resistance.  On the third day, they practised interpreting the results, using sentence prompts.

But the “assignment” wasn’t “hook up a circuit.”  The skills I was assessing were “Interpret ohmmeter results”, “Interpret voltmeter results”, “Document a circuit”, etc.  So I asked them to choose from among the circuits they had worked on, and let me know which one (or two) best showed their abilities.

I haven’t reviewed the submissions yet, but I’m anticipating that they’ll need feedback not only on the skill of interpreting a circuit but also on the skill of self-assessment.

In support of this, I’ve had students evaluate the data gathered by the entire class.  Part of my hope is that seeing each other’s work and noticing what makes it easier or harder to make sense of will help them better assess their own work.  What suggestions do you have for helping students get better at choosing which of their work best demonstrates their skills?

SBG superhero

I stole this graphic from Kelly O’Shea. If you haven’t already, click through and read her whole blog.

By last winter, the second year students were pretty frustrated.  They were angry enough about the workload to go to my department head about it.  The main bone of contention seemed to be that they had to demonstrate proficiency in things in order to pass (by reassessing until their skills met the criteria), unlike in some other classes where actual proficiency was only required if you cared about getting an A.  Another frequently used argument was, “you can get the same diploma for less work at [other campus.]” Finally, they were angry that my courses were making it difficult for them to get the word “honours” printed on their diploma.  *sigh*

It was hard for me to accept, especially since I know how much that proficiency benefits them when competing for and keeping their first job.  But it meant I wasn’t doing the Standards-Based Grading sales pitch well enough.

Anyway, no amount of evidence-based teaching methods will work if the students are mutinous.  So this year, I was looking for ways to reduce the workload, to reduce the perception that the workload is unreasonable, and to re-establish trust and respect.  Here’s what I’ve got so far.

1. When applying for reassessment, students now only have to submit one example of something they did to improve, instead of two.  This may mean doing one question from the back of the book.  I suspect this will result in more students failing their reassessments, but that in itself may open a conversation.

2. I’ve added a spot on the quiz where students can tell me whether they are submitting it for evaluation, or just for practice.  If they submit it for practice, they don’t have to submit a practice problem with their reassessment application, since the quiz itself is their practice problem.  They could always do this before, but they weren’t using the option, instead pressuring themselves to get everything right the first time.   Writing it on the quiz seems to make it more official, and means they have a visible reminder each and every time they write a quiz.  Maybe if it’s more top-of-mind, they’ll use it more often.

3. In the past, I’ve jokingly offered “timbit points” for every time someone sees the logic in a line of thinking they don’t share.  At the end of the semester, I always bring a box of timbits in to share on the last day.  In general, I’m against bribery, superficial gamification (what’s more gamified than schooling and grades??), and extrinsic motivation, but I was bending my own rules as a way to bring some levity to the class.  But I realized I was doing it wrong.  My students don’t care about timbits; they care about points.  My usual reaction to this is tight-lipped exasperation.  But my perspective was transformed when Michael Doyle suggested a better response: deflate the currency.

So now, when someone gives a well-thought-out “wrong” answer, or sees something good in an answer they disagree with, they get “critical thinking points.”  At the end of the semester, I promised to divide the points by the number of students and add them straight onto everyone’s grade, assuming they completed the requirements to pass.  I’m giving these things out by the handful.  I hope everybody gets 100.  Maybe the students will start to realize how ridiculous the whole thing is; maybe they won’t.  They and I still have a record of which skills they’ve mastered, and it’s still impossible to pass if they’re not safe or not employable. Since their grades are utterly immaterial to absolutely anything, it just doesn’t matter.  And it makes all of us feel better.
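For clarity, the bookkeeping is nothing more than this (a sketch with made-up names and numbers; the cap at 100 is my assumption):

```python
# Sketch: pool the class's "critical thinking points", split them evenly,
# and add the share to every student who met the requirements to pass.
def apply_pooled_bonus(grades, pool, passed):
    share = pool / len(grades)
    return {name: min(100, grade + share) if passed[name] else grade
            for name, grade in grades.items()}

grades = {"Avery": 78, "Blake": 91, "Casey": 64}
passed = {"Avery": True, "Blake": True, "Casey": False}
print(apply_pooled_bonus(grades, pool=40, passed=passed))
```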

In the meantime, the effect in class has been borderline magical.  They are falling over themselves exposing their mistakes and the logic behind them, and then thanking and congratulating each other for doing it — since it’s a collective fund, every contribution benefits everybody.  I’m loving it.

4. I’ve also been sticking much more rigidly to the schedule of when we are in the classroom and when we are in the shop.  In the past, I’ve scheduled flexibly so that we could take advantage of whatever emerged from student work: if we needed classroom time, we’d take it, and vice versa.  But in a context where people are already feeling overwhelmed and anxious, one more source of uncertainty is not a gift.  The new system means we are sometimes in the shop before they’re ready.  I’m dealing with this by cautiously re-introducing screencasts — but with a much stronger grip on reading comprehension techniques.  I’m also making the screencast information available as a PDF document and a print document.  On top of that, I’m adopting Andy Rundquist’s “back flip” technique: screencasts are created after class in order to answer lingering questions submitted by students.  I hope that those combined ideas will address the shortcomings that I think are inherent in the “flipped classroom.”  That one warrants a separate post — coming soon.

The feedback from the students is extremely positive.  It’s early yet to know how these interventions affect learning, but so far the students just seem pleased that I’m willing to hear and respond to their concerns, and to try something different.  I’m seeing a lot of hope and goodwill, which in themselves are likely to make learning (not to mention teaching) a bit easier.  To be continued.

Last week, I presented a 90-minute workshop on Assessment Survival Skills during a week-long course on Assessing and Evaluating.  Nineteen people attended the workshop.  Sixteen were from the School of Trades and Technology (or related fields in other institutions).  There were lively small-group discussions about applying the techniques we discussed.

Main Ideas

  1. Awesome lesson plans can help students learn, but so can merely decent lesson plans given by well-rested, patient teachers
  2. If grading takes too long, try techniques where students correct the mistakes or write feedback to themselves
  3. If they don’t use feedback that you provide, teach students to write feedback, for themselves or each other
  4. If students have trouble working independently in shops/labs, try demonstrating the skill live, creating partially filled note-taking sheets, or using an inspection rubric
  5. If you need more or better activities and assignments quickly, try techniques where students choose, modify, or create questions based on a reference book, test bank, etc.
  6. If students are not fully following instructions, try handing out a completed sample assignment, demonstrating the skill in person, inspection reports, or correction assignments

When I asked for more techniques, the idea of challenging students to create questions that “stump the teacher” or “stump your classmates” came up twice.  Another suggestion was having students get feedback from employers and industry representatives.

Participants’ Questions

At the beginning of the workshop, participants identified these issues as most pressing.

[Photo: the participants’ numbered list of pressing issues; #3–#6 are referenced below.]

Based on that, I focused mostly on helping students do their own corrections/feedback (#3), and how to generate practice problems quickly (#5).  Interestingly, those were the two ideas least likely to receive a value rating of 5/5 on the feedback sheets — but the most often reported as “new ideas”.  I think I did the right thing by skipping the techniques for helping students follow instructions (#6), since that was the idea people were most likely to describe as one they “already use regularly.” Luckily, the techniques I focused on are very similar to the techniques for addressing all the concerns, except for a few very particular techniques about reducing student dependence on the instructor in the shop/lab (#4), which I discussed separately.  I received completed feedback sheets from 18 participants, and 16 of them identified at least one idea as both new and useful, so I’ll take that as a win.  Also, I got invited to Tanzania!

Participants talked a lot about what it’s like to have students who all have different skills, abilities, and levels of experience.  Another hot topic was how to deal with large amounts of fairly dry theory.  We talked a lot about techniques that help students assess their skills and choose what content they need to work on, so that students at all levels can challenge and scaffold themselves.  We also talked about helping students explore and choose what format they want to use to do that, as a way of increasing engagement with otherwise dry material.  I didn’t use the term, but I was curious to find out in what ways Universal Design for Learning might be the answer to questions and frustrations that instructors already have.  If I ever get the chance, as many participants requested, to expand the workshop, I think that’s the natural next step.

Feedback About the Workshop

Overall feedback was mostly positive. Examples (and numbers of respondents reporting something similar):

“Should be a required course”

“I liked the way you polled the class to find out what points to focus on,” “tailored,” “customized” (4)

“Well structured,” “Interactive” (7)

“Should be longer” (11)

“Most useful hour and a half so far” (4)

Feedback About Handout

“If someone tries to take this from me, there’s gonna be a fight!”

Feedback About Me

“Trade related information I can relate to” (4)

“High energy,” “fun,” “engaging,” “interesting” (5)

“You were yourself, didn’t feel scripted,” “Loved your style,” “Passionate” (3)

“That’s the tradesperson coming out!”

XKCD comic: “The erratic feedback from a randomly-varying wireless signal can make you crazy.”

I’m thinking about how to make assessments even lower stakes, especially quizzes.  Currently, any quiz can be re-attempted at any point in the semester, with no penalty in marks.  For a student attempting a quiz for the second time, I require a corrected first attempt and two practice problems with the application for reassessment. (FYI, mastery can also be demonstrated in any alternate format in lieu of a quiz, but students rarely choose that option.)

The upside of requiring practice problems is eliminating the brute-force approach where students just keep randomly re-trying quizzes, thinking they will eventually show mastery (this doesn’t work, but it wastes a lot of time).  It also introduces some self-assessment into the process.  We practise writing good-quality feedback, including trying to figure out what caused the mistake in the first place.

The downside is that the workload in our program is really unreasonable (dear employers of electronics technicians, if you are reading this: most hard-working beginners cannot go from zero to meeting your standards in two years.  Please contact me to discuss).  So, students are really upset about having to do two practice problems.  I try to sell it as “customized homework” — since I no longer assign homework practice problems, they are effectively exempting themselves from the “homework” in areas where they have already demonstrated proficiency.  The students don’t buy it, though.  They put huge pressure on themselves to get things right the first time, so they won’t have to do any practice.  That, of course, sours our classroom culture and makes it harder for them to think well.

I’m considering a couple of options.  One is, when they write a quiz, to ask them whether they are submitting it to be evaluated or just for feedback.  Again, it promotes self-assessment: am I ready?  Am I confident?  Is this what mastery looks and feels like?

If they’re submitting for feedback, I won’t enter it into the gradebook, and they don’t have to submit practise problems when they try it next (but if they didn’t succeed that time, it would be back to practising).

Another option is simply to chuck the practice-problem requirement.  I could ask for a corrected quiz and good-quality diagnostic feedback (written by themselves, to themselves) instead.  It would be a shame, since the practice really does benefit them, but I’m wondering if the requirement is worth it.

All suggestions welcome!

Here are the resources I’ll be using for the Peer Assessment Workshop.

Participant Handout

Participants will work through this handout during the workshop.  Includes two practice exercises: one for peer assessment of a hands-on task, and one for peer assessment of something students have written.  Click through to see the buttons to download or zoom.

 

Feel free to download the Word version if you like.

Workshop Evaluation

This is the evaluation form participants will complete at the end of the workshop.   I really like this style of evaluation; instead of asking participants to rank on a scale of 1-5 how much they “liked” something, it asks whether it’s useful in their work, and whether they knew it already.   This gives me a lot more data about what to include/exclude next time.  The whole layout is cribbed wholesale, with permission, from Will At Work Learning.  He gives a thorough explanation of the decisions behind the design; he calls it a “smile sheet”, because it’s an assessment that “shows its teeth.”

Click through to see the buttons to download or zoom.

 

Feel free to download the Word version if you like.

Other Stuff

In case they might be useful, here are my detailed presentation notes.

I wrote recently about creating a rubric to help students analyze their mistakes.  Here are some examples of what students wrote — a big improvement over “I get it now” and “It was just a stupid mistake.”

The challenge now will be helping them get in the habit of doing this consistently.  I’m thinking of requiring this on reassessment applications.  The downside would be a lot more applications being returned for a second draft, since most students don’t seem able to do this kind of analysis in a single draft.

Understand What’s Strong

  • “I thought it was a parallel circuit, and my answer would have been right if that was true.”

  • “I got this question wrong but I used the idea from the model that more resistance causes less current and less current causes less power to be dissipated by the light bulbs.”

  • “The process of elimination was a good choice to eliminate circuits that didn’t work.”

  • “A good thing about my answer is that I was thinking if the circuit was in series, the current would be the same throughout the circuit.”

 

Diagnose What’s Wrong

  • “The line between two components makes this circuit look like a parallel circuit.”

  • “What I don’t know is, why don’t electrons take the shorter way to the most positive side of the circuit?”

  • “I made the mistake that removing parallel branches would increase the remaining branches’ voltage.”

  • “What I didn’t realize was that in circuit 2, C is the only element in the circuit so the voltage across the light bulb will be the battery voltage, just like light bulb A.”

  • “I looked at the current in the circuit as if the resistor would decrease the current from that point on.”

  • “I think I was thinking of the A bulb as being able to move along the wire and then it would be in parallel too.”

  • “What I missed was that this circuit is a series-parallel with the B bulb in parallel with a wire, effectively shorting it out.”

  • “What I did not realize at first about Circuit C was that it was a complete circuit because the base of the light bulb is in fact metal.”

  • “I thought there would need to be a wire from the centre of the bulb to be a complete circuit.”

  • “I wasn’t recognizing that in Branch 2, each electron only goes through one resistor or the other.  In Branch 1, electrons must flow through each resistor.”

  • “I was comparing the resistance of the wire and not realizing the amount of distance electrons flowed doesn’t matter because wire has such low resistance either way.”

  • “My problem was I wasn’t seeing myself as the electrons passing through the circuit from negative to positive.”

 

Improve

  • “In this circuit, lightbulb B is shorted so now all the voltage is across light bulb A.”

  • “When there is an increase in resistance, and as long as the voltage stays constant, the current flowing through the entire circuit decreases.”

  • “After looking into the answer, I can see that the electrons can make their way from the bottom of the battery to the middle of the bulb, then through the filament, and back to the battery, because of metal conducting electrons.”

  • “To improve my answer, I could explain why they are in parallel, and also why the other circuits are not parallel.”

  • “I can generalize this by saying in series circuits, the current will stay the same, but in parallel circuits, the current may differ.”

  • “From our model, less resistance causes more current to flow.  This is a general idea that will work for all circuits.”

This year I’ve really struggled to get conversation going in class.  I needed some new ways to kick-start the questioning, counter-example-ing, restating, and exploring implications that fuel inquiry-based science.  I suspected students were silent because they were afraid that their peers and/or I would find out what they didn’t know.  I needed a more anonymous way for them to ask questions and offer up ideas.

About that time, I read Mark Guzdial’s post about Peer Instruction in Computer Science.  While exploring the resources he recommends, I found this compelling and very short PI teacher cheat sheet. I was already curious because Andy Rundquist and Joss Ives were blogging about interesting ways to use PI, even with small groups.  I hadn’t looked into it before because, until this year, I had never been so unsuccessful in fostering discussion.

The cheat-sheet’s clarity and my desperation to increase in-class participation made me think about it differently.  I realized I could adapt some of the techniques, and it worked — I’ve had a several-hundred-percent increase in students asking questions, proposing ideas, and taking part in scientific discourse among themselves.    Caveat: what I’m doing does not follow the research model proposed by PI’s proponents.  It just steals some of their most-easily adopted ideas.

What is Peer Instruction (PI)?

If you’re not familiar with it, the basic idea is that students get the “lecture” before class (via readings, screencasts, etc), then spend class time voting on questions, discussing in small groups, and voting again as their understanding changes.  Wikipedia has a reasonably clear and concise entry on PI, explaining the relationship between Peer Instruction, the “flipped classroom”, and Just-In-Time Teaching.

Why It’s Not Exactly PI

My home-made voting flashcards

  • I don’t have clickers, and don’t have any desire for them.  If needed, I use home-made voting cards instead.  Andy explains how effective that can be.
  • I prefer to use open-ended problems, sometimes even problems the students can’t solve with their current knowledge, rather than multiple-choice questions.  That’s partly because I don’t have time to craft good-quality MC items, partly because I want to make full use of the freedom I have to follow students’ noses about what questions and potential answers are worth investigating.
  • Update (Feb 19): I almost forgot to mention, my classroom is not flipped.  In other words, I don’t rely on before-class readings, screencasts, etc.

What About It is PI-Like?

  1. I start with a question for students to tackle individually.  Instead of multiple-choice, it could be a circuit to analyze, or I might ask them to propose a possible cause for a phenomenon we’ve observed.
  2. I give a limited amount of time for this (maybe 2-3 minutes), and will cut it even shorter if 80% of students finish before the maximum time.
  3. I monitor the answers students come up with individually.  Sometimes I ask for a vote using the flashcards.  Other times I just circulate and look at their papers.
  4. I don’t discuss the answers at that point.  I give them a consistent prompt: “In a moment, not right now but in a moment, you’re going to discuss in groups of 4.  Come to agreement on whatever you can, and formulate questions about whatever you can’t agree on.  You have X minutes.  Go.”
  5. I circulate and listen to conversations, so I can prepare for the kinds of group discussion, direct instruction, or extension questions that might be helpful.
  6. When we’re 30 seconds from the end, or when the conversation starts to die down, I announce “30 more seconds to agree or come up with questions.”
  7. Then, I ask each group to report back.  Usually I collect all the questions first, so that Group B doesn’t feel silenced if their question is answered by Group A’s consensus. Occasionally I ask for a flashcard vote at this point; more often, I collect answers from each group verbally and write them on the board — roughly fulfilling the function of “showing the graph” of the clicker results.
  8. If the answers are consistent across the group and nothing needs to be clarified, I might move on to an extension question.  If something does need clarification, I might do some direct instruction.  Either way, I encourage students to engage with the whole group at this point.

Then we’re ready to move on — maybe with another round, maybe with an extension question (the cheat-sheet gives some good multi-purpose prompts, like “What question would make Alternate Answer correct?”).  I’m also a fan of “why would a reasonable person give Alternate Answer?”

Why I Like It

It doesn’t require a ton of preparation.  I usually plan the questions I’ll use (sometimes based on their pre-class reading, which, in my world, is actually in-class reading…).  But any time during class that I feel like throwing a question out to the group, I can do this off the cuff if I need to.

During the group discussion phase (Step 4), questions and ideas start flowing and scientific discourse flourishes.  Right in this moment, they’re dying to know what their neighbour got, and enjoy trying to convince each other.  I don’t think I buy the idea that these techniques help because students learn better from each other — frankly, they’re at least as likely to pseudoteach each other as I am.  I suspect that the benefit comes not so much from what they hear from others but from what they formulate for themselves.   I wish students felt comfortable calling that stuff out in a whole group discussion (with 17 of us in the room, it can be done), but they don’t.  So.  I go with what works.

No one outside the small group has to know who asked which questions.  The complete anonymity of clickers isn’t preserved, but that doesn’t seem to be a problem so far.

Notes For Improvement

There are some prompts on the cheat sheet that I could be using a lot more often — especially replacing “What questions do you have?” or “What did you agree on?” with “What did your group talk about?” or “If your group changed its mind, what did you discuss?”

There’s also a helpful “Things Not To Do (that seemed like a good idea at the time)” page that includes my favourite blooper — continuing to talk about the problem after I’ve posed the question.

If I were to add something to the “What Not To Do” list, it would be “shifting/pacing while asking the question and immediately afterwards.”  I really need to practise holding still while giving students a task, and then continuing to hold still until they start the task.   My pacing distracts them and slows down how quickly they shift attention to their task; and if I start wandering the room immediately, it creates the impression that they don’t have to start working until I get near enough to see their paper.
