You are currently browsing the category archive for the ‘Professional Development’ category.

As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year.  The one that presses on me the most is “readiness.”  In other words,

  • What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
  • How do I make sure they get the greatest opportunity to learn those things?
  • How do I make sure no one graduates without those things?  And most frustratingly,
  • How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?

Some people might disagree that some of these points are worth worrying about.  If you don’t teach in a trade school, these questions may be irrelevant or downright harmful.  K-12 education should not be a trade school.  Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be).  However, I DO teach in a trade school, so these are the questions that matter to me.

Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it.  It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.

In that light, my questions about job-readiness have reached the point of obsession.  To be a technician is to inquire.  It is to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults.  However, teaching my students to inquire means they must inquire.  I can’t force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.).  At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs.  I haven’t found the balance yet.

I’ll probably write more about this as I try to figure it out.  In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing.  Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.”  In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question.  In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.

In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms.  During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement.  There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their level of readiness in the telecom field might be much higher than if they worked on motor controls.  Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics).  There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments).    But overall, it’s a really useful experience.

Still, I continue to wonder about the accuracy of my judgement.  I also wonder about how to open this conversation with my colleagues.  It seems like something it would be useful to work on together.  Or would it?  The comments on Wiggins’ post are almost as interesting as the post itself.

It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.

The question isn’t “why aren’t students prepared for college?”  It’s also not “are college profs’ expectations reasonable?”  It’s “why are we so mistaken about what college instructors expect?”

My students, too, often miss this kind of subtle distinction.  It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).

Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations.  Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.

A similar problem is reported about college instructors.  The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.

Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety.  My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments.  Most of them literally can’t do it at first.  It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection.  Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”?  If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure.  As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”

One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have.  For those of us who don’t, let’s brainstorm about how we can get it, or what else might help.  Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.

I’m interested in commenter I Hodge’s point about the survey questions.  The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension.  In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared.  This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college.  I wonder why they posed those two questions so differently?

Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students.  Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses.  Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards or standardized tests or other systemic problems, I don’t see how colleges can fix that, other than by administering an entrance test.  Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one.  Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system?  I really don’t know the answer to this.

Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues.  I know I’d be all for it.  There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college.  Are we ready for this?

Michael Pershan kicked my butt recently with a post about why teachers tend to plateau in skill after their third year, connecting it to Cal Newport’s ideas such as “hard practice” (and, I would argue, “deep work”).

Michael distinguishes between practice and hard practice, and wonders whether blogging belongs on his priority list:

“Hard practice makes you better quickly. Practice lets you, essentially, plateau. …

Put it like this: do you feel like you’re a 1st year teacher when you blog? Does your brain hurt? Do you feel as if you’re lost, unsure how to proceed, confused?
If not, you’re not engaged in hard practice.”

Ooof.  On one hand, it made me face my desire to avoid hard practice; I feel like I’ve spent the last 8 months trying to decrease how much I feel like that.  I’ve tried to create classroom procedures that are more reusable and systematic, especially for labs, whiteboarding sessions, class discussions, and model presentations.

It’s a good idea to periodically take a hard look at that avoidance, and decide whether I’m happy with where I stand.  In this case, I am.  I don’t think the goal is to “feel like a first year teacher” 100% of the time; it’s not sustainable and not generative.  But it reminds me that I want to know which activities make me feel like that, and consciously choose some to seek out.

Michael makes this promise to himself:

It’s time to redouble my efforts. I’m half way through my third year, and this would be a great time for me to ease into a comfortable routine of expanding my repertoire without improving my skills.

I’m going to commit to finding things that are intellectually taxing that are central to my teaching.

It made me think about what my promises are to myself.

Be a Beginner

Do something every summer that I don’t know anything about and document the process.  Pay special attention to how I treat others when I am insecure, what I say to myself about my skills and abilities, and what exactly I do to fight back against the fixed-mindset that threatens to overwhelm me.  Use this to develop some insight into what exactly I am asking from my students, and to expand the techniques I can share with them for dealing with it.

Last summer I floored my downstairs.  The summer before that I learned to swim — you know, with an actual recognizable stroke.  In both cases, I am proud of what I accomplished.  In the process, I was amazed to notice how much concentration it took not to be a jerk to myself and others.

Learn More About Causal Thinking

I find myself being really sad about the ways my students think about causality.  On one hand, I think my recent dissections of the topic are a prime example of “misconceptions listening” — looking for the deficit.  I’m pretty sure my students have knowledge and intuition about cause that I can’t see, because I’m so focused on noticing what’s going wrong.  In other words, my way of noticing students’ misconceptions is itself a misconception.  I’d rather be listening to their ideas fully, doing a better job of figuring out what’s generative in their thinking.

What to do about this? If I believe that my students need to engage with their misconceptions and work through them, then that’s probably what I need too. There’s no point in my students squashing their misconceptions in favour of “right answers”; similarly, there’s no point in me squashing my sadness and replacing it with some half-hearted “correct pedagogy.”

Maybe I’m supposed to be whole-heartedly happy to “meet my students where they are,” but if I said I was, I’d be lying. (That phrase has been used so often to dismiss my anger at the educational malpractice my students have endured that I can’t even hear it without bristling).  I need to midwife myself through this narrow way of thinking by engaging with it.  Like my students, I expect to hold myself accountable to my observations, to good-quality reasoning, to the ontology of learning and thinking, and to whatever data and peer feedback I can get my hands on.

My students’ struggle with causality is the puzzle from which my desire for explanation emerged; it is the source of the perplexity that makes me unwilling to give up. I hope that pursuing it honestly will help me think better about what it’s like when I ask my students to do the same.

Interact with New Teachers

Talking with beginning teachers is better than almost anything else I’ve tried for forcing me to get honest about what I think and what I do.  There’s a new teacher in our program, and talking things through with him has been a big help in crystallizing my thoughts (mutually useful, I think).  I will continue doing this and documenting it.  I also put on a seminar on peer assessment for first-year teachers last summer; it was one of the more challenging lesson plans I’ve ever written.  If I have another chance to do this, I will.

Work for Systemic Change

I’m not interested in strictly personal solutions to systemic problems.  I won’t have fun, or meet my potential as a teacher, if I limit myself to improving me.  I want to help my institution and my community improve, and that means creating conditions and communities that foster change in collective ways.  For two years, I tried to do a bit of this via my campus PD committee; for various reasons, that avenue turned out not to lead in the directions I’m interested in going.  I’ve had more success pressing for awareness and implementation of the Workplace Violence Prevention regulations that are part of my local jurisdiction’s Occupational Health and Safety Act.

I’m not sure what the next project will be, but I attended an interesting seminar a few months ago about our organization’s plans for change.  I was intrigued by the conversations happening about improving our internal communication.  I’ve also had some interesting conversations recently with others who want to push past the “corporate diversity” model toward a less ahistorical model of social justice or cultural competence.  I’ll continue to explore those to find out which ones have some potential for constructive change.

Design for Breaks

I can’t do this all the time or I won’t stay in the classroom.  I know that now.  As of the beginning of January, I’ve reclaimed my Saturdays.  No work on Saturdays.  It makes the rest of my week slightly more stressful, but it’s worth it.  For the first few weeks, I spent the entire day alternately reading and napping.  Knowing that I have that to look forward to reminds me that the stakes aren’t as high as they sometimes seem.

I’m also planning to go on deferred leave for four months starting next January.  After that, I’ve made it a priority to find a way to work half-time.   The kind of “intellectually taxing” enrichment that I need, in order for teaching to be satisfying, takes more time than is reasonable on top of a full-time job.  I’m not willing to permanently sacrifice my ability to do community volunteer work, spend time with my loved ones, and get regular exercise. That’s more of a medium-term goal, but I’m working a few leads already.

Anyone have any suggestions about what I should do with 4 months of unscheduled time starting January 2014?

I just received a notice from the American Society for Engineering Education about a free online PD project for faculty who teach introductory engineering science.  It’s called  Advancing Engineering Education Through Virtual Communities of Practice, and they’ve just extended the application deadline to Feb. 8. Participants can choose from these topics:

  • Electric circuits
  • Mechanics
  • Thermodynamics
  • Mass & energy balance

I can’t tell if you have to be a member of an engineering department, or if it’s enough to teach one of these topics; I can’t even tell if you have to be American.  In any case, I applied.  From what I can tell, accepted applicants participate in once-weekly online meetings with facilitators who have experience with “research-based instructional approaches” (though they don’t tell you which ones, except for references to “Outcome-Based Education” — which I think of as an assessment approach, not exactly an instructional approach).

I suppose I should be concerned about the lack of details on the website (even the application deadline on the front page hasn’t been changed to reflect the extension), but I’m chalking it up to this being the prototype run, and anyway, the price is right.  The informed consent form makes it clear that this is a research project to explore the viability of the model, which is fine by me.  It’ll be worth it if it leads to any of these things:

  1. Working on instructional changes in a systematic way (rather than the somewhat haphazard and occasionally accidental way I’ve been doing it so far)
  2. Focusing on the specific ways particular instructional approaches play out in circuits courses, not to mention deepening my content knowledge
  3. Having a consistent group to work with over the course of 6 months (and two different academic years).

It seems to bring together the advantages of something like the Global Physics Department, with the bonus that every meeting will be about exactly what I teach, and the meeting time will be a part of my scheduled workday.

The email I received from the ASEE contains details that are not available on the website, so I’m including it below.

NSF-funded project to develop engineering faculty virtual communities of practice

Engineering education research has shown that many research-based instructional approaches improve student learning but these have not diffused widely because faculty members find it difficult to acquire the required knowledge and skills by themselves and then sustain the on-going implementation efforts without continued encouragement and support.
ASEE with a grant from NSF is organizing several web-based faculty communities that will work to develop the group’s understanding of research-based instructional approaches and then support individual members as they implement self-selected new approaches in their classes.  Participants should be open to this new technology-based approach and see themselves as innovators in a new approach to professional development and continuous improvement.

The material below and the project website provide more information about these communities and the application process. Questions should be addressed to Rocio Chavela at

If you are interested in learning about effective teaching approaches and working with experienced mentors and collaborating colleagues as you begin using these in your classroom, you are encouraged to apply to this program. If you know of others that may be interested, please share this message with them.

Please consider applying for this program and encouraging potentially interested colleagues to apply. Applications are due by February 8, 2013.

Additional Details About the Program


Faculty groups, which will effectively become virtual communities of practice (VCP) with 20 to 30 members, will meet weekly at a scheduled time using virtual meeting software during the second half of the Spring 2013 Semester and during the entire Fall 2013 Semester. Each group will be led by two individuals that have implemented research-based approaches for improving student learning, have acquired a reputation for innovation and leadership in their course area, and have completed a series of training sessions to prepare them to lead the virtual communities. Since participants will be expected to begin utilizing some of the new approaches with the help and encouragement of the virtual group, they should be committed to teaching a course in the targeted area during the Fall 2013 Semester.

 VCP Topics and Meeting Times

This year’s efforts are focusing on the introductory engineering science courses and the list below shows the course areas along with the co-leaders and the scheduled times for each virtual community:

Electric Circuits
Co-leaders are Lisa Huettel and Kenneth Connor
Meeting time is Thursday at 1:30 – 3:00 p.m. EST starting March 21, 2013 and running until May 16, 2013

Engineering Mechanics
Co-leaders are Brian Self and Edward Berger
Meeting time is Thursday at 1:30 – 3:00 p.m. EST starting April 3, 2013 and running until May 16, 2013

Thermodynamics
Co-leaders are John Chen and Milo Koretsky
Meeting time is Wednesday at 2:00 – 3:30 p.m. EST starting April 3, 2013 and running until May 23, 2013

Mass and Energy Balance
Co-leaders are Lisa Bullard and Richard Zollars
Meeting time is Thursday at 12:30 – 2:00 p.m. EST starting March 21, 2013 and running until May 16, 2013

Application Process

Interested individuals should complete the on-line application at The application form asks individuals to describe their experience with introductory engineering science courses, to indicate their involvement in education research and development activities, to summarize any classroom experiences where they have tried something different in their classes, and to discuss their reasons for wanting to participate in the VCP.

The applicant’s Department Head or Dean needs to complete an on-line recommendation form to indicate plans for having the applicant teach the selected courses in the Fall 2013 Semester and to briefly discuss why participating in the VCP will be important to the applicant.

Since one goal is to demonstrate that the VCP approach will benefit relatively inexperienced faculty, applicants do not need a substantial record of involvement in education research and development. For this reason, the applicant’s and the Department Head’s or Dean’s statements about the reasons for participating will be particularly important in selecting participants.

Application Deadline

Applications are due by February 8, 2013. The project team will review all applications and select a set of participants that are diverse in their experience, institutional setting, gender, and ethnicity.

Last month, I was asked to give a 1hr 15 min presentation on peer assessment to a group of faculty.  It was part of a week-long course on assessment and evaluation.  I was pretty nervous, but I think I managed to avoid most of the pitfalls. The feedback was good and I learned a lot from the questions people asked.

Some Examples of Feedback

“Hopefully by incorporating more peer assessment for the simple tasks will free up more of my time to help those who really need it as well as aiding me in becoming more creative instead of corrective”

“You practiced what you were preaching”

“The forms can be changed and used in my classes”

“Great facilitator — no jargon, plain talk, right to the point! Excellent.  Very useful.”

“You were great! I like you! Good job! (sorry about that)  :)”

“Although at first, putting some of the load on the learner may seem lazy on the part of the instructor, in actual fact, the instructor may then be able to do even more hands on training, and perhaps let thier creativity blossom when unburdened by “menial tasks”.”

“Needed more time”

“Good quality writing exercise was a bit disconnected”

“Finally a tradeswoman who can relate to the trades”

In a peer assessment workshop, participants’ assessments of me have the interesting property of also assessing them.  The comments I got from this workshop were more formative than I’m used to — there were few “Great workshop” type comments, and more specific language about what exactly made it good.  Of course, I loved the humour in the “You were great” comment shown above —  if someone can parody something, it’s pretty convincing evidence of understanding.  I also loved the comment about before-thinking and after-thinking, especially the insight into the fear of being lazy, or being seen as lazy.

Last but not least, I got a lot of verbal and non-verbal feedback from the tradespeople in the room.  They let me know that they were not used to seeing a tradesperson running the show, and that they really appreciated it.  It reinforced my impressions about the power of subtle cues that make people feel welcome or unwelcome (maybe a post for another day).


  1. Peer assessment is a process of having students improve their work based on feedback from other students
  2. To give useful feedback, students will need clear criteria, demonstrations of how to give good feedback, and opportunities for practice
  3. Peer assessment can help students improve their judgement about their own work
  4. Peer assessment can help students depend less on the teacher to solve simple problems
  5. Good quality feedback should include a clear statement of strengths and weaknesses, give specific ideas about how to improve, and focus on the student’s work, not their talent or intelligence
  6. Feedback based on talent or intelligence can weaken student performance, while feedback based on their work can strengthen it

I distributed this handout for people to follow.  I used three slides at the beginning to introduce myself (via the goofy avatars shown here) and to show the agenda.


I was nervous enough that I wrote speaking notes that are almost script-like.  I rehearsed enough that I didn’t need them most of the time.


Avoiding Pitfall #1: People feeling either patronized or left behind

I started with definitions of evaluation and assessment, and used flashcards to get feedback from the group about whether my definitions matched theirs.  I also gave everyday examples of assessment (informal conversations) and evaluation (quizzes) so that it was clear that, though the wording might sound foreign, “evaluation” and “assessment” were everyday concepts.  There were definitely some mumbled “Oh! That’s what they meant” comments coming from the tables, so I was glad I had taken a few minutes to review.  At the same time, by asking people if my definitions agreed with theirs, I let them know that I knew they might already have some knowledge.

Participants’ Questions

After introducing myself and the ideas, I asked the participants to take a few minutes to write if/how they use peer assessment so far, and what questions they have about peer assessment.  Questions fell into these categories:

  • How can I make sure that peer assessment is honest and helpful, not just a pat on the back for a friend, or a jab at someone they don’t like, or lashing out during a bad day?
  • What if students are too intimidated/unconfident to share their work with their peers?  (At least one participant worried that this could be emotionally dangerous)
  • Why would students buy in — what’s in it for the assessor?
  • When/for what tasks can it be used?
  • Logistics: does everyone participate?  Is it required? Should students’ names be on it?  Should the assessment be written?
  • How quick can it be?  We don’t have a lot of time for touchy-feely stuff.
  • Can this work with individualized learning plans, where no two students are at the same place in the curriculum?

Is Peer Assessment Emotionally Safe?

I really didn’t see these questions coming.  I was struck by how many people worried that peer assessment could jeopardize their students’ emotional well-being.  That point was raised by participants ranging from the School of Trades to the Health & Human Services faculty.

It dawned on me while I was standing there that for many people, their only experience of peer assessment is the “participation” grade they got from classmates on group projects, so there is a strong association with how people feel about each other.  I pointed that out, and saw lots of head nodding.

Then I told them that the kind of peer assessment I was talking about specifically excluded judging people’s worth or discussing the reviewer’s feelings about the reviewee.  It also wasn’t about group projects.  We were going to assess solder joints, and I had never seen someone go home crying because they were told that a solder joint was dirty.  It was not about people’s feelings.  It was about their work. 

I saw jaws drop.  Some School of Trades faculty actually cheered.  It really gave me pause.  In these courses, and in lots of courses about education, instructors encourage us to “reflect,” and assignments are often “reflective pieces.”  I have typically interpreted “reflect” to mean “assess” — in other words, analyze what went well, what didn’t, why, and what to do about it.  My emotions are sometimes relevant to this process, and sometimes not.  I wonder how other people interpret the directive to “reflect.”  I’m starting to get the impression that at least some people think that instructors require them to “talk about your emotions,” with little strategy about why, what distinguishes a strong reflection from a weak one, or what it is supposed to accomplish.

How to get honest peer assessments?

I talked briefly about helping students generate useful feedback.  One tactic that I used a lot at the beginning of the year was to collect all the assessments before I handed them to the recipient.  The first few times, I wrote feedback on the feedback, passed it back to the reviewer, and had them do a second draft (based on definite criteria, like clarity, consistency, causality).  Later, I might collect and read the feedback before giving it back to the recipient.  I never had a problem with people being cruel, but if that had come up, it would have been easy enough to give it back to the reviewer (and have a word with them).

Another way to lower the intimidation factor is to have everyone assess everyone.  This gives students an incentive to be decent and maybe a bit less clique-ish, since all their classmates will assess them in return.  It also means that, even if they get some feedback from one person that’s hard to take, they will likely have a dozen more assessments that are quite positive and supportive.

Students are reluctant to “take away points” from the reviewee, so it helps that this feedback does not affect the recipient’s grade at all.  It does, however, affect the reviewer’s grade; reviewing is a skill on the skill sheet, so they must complete it sooner or later.  Students are quick to realize that it might as well be sooner.   Also, I typically do this during class time, so I had a roughly 100% completion rate last year.

How to get useful peer assessments?

I went ahead with my plan to have workshop participants think about solder joints.  A good solder joint is shiny, smooth, and clean.  It has to meet a lot of other criteria too, but these three are the ones I get beginning students to focus on.  I showed a solder joint (you can see it in the handout) and explained that it was shiny and clean but not smooth.

Then I directed the participants to an exercise in the handout that showed 8 different versions of feedback for that joint (i.e. “This solder joint is shiny and clean, but not smooth”), and we switched from assessing soldering to assessing feedback.  I asked participants to work through the feedback, determining if it met these criteria:

  1. Identifies strengths and weaknesses
  2. Gives clear suggestion about what to do next time
  3. Focusses on the student’s work, not their talent or intelligence

We discussed briefly which feedback examples were better than others (the example I gave above meets criteria 1 and 3, but not 2).  This got people sharing their own ideas about what makes feedback good. I didn’t try to steer toward any consensus here; I just let people know if I understood their point or not.  Very quickly, we were having a substantive discussion about quality feedback, even though most people had never heard of soldering before the workshop.  I suggested that they try creating an exercise like this for their own classroom, as a way of clarifying their own expectations about feedback.

Avoiding Pitfall #2: This won’t work in my classroom

Surprisingly, this didn’t come up at all.

I came back often to the idea that there are things students can assess for each other and things they need us for.  I made sure to reiterate that each teacher would be the best judge of which tasks were which in their discipline.  I also invited participants to consider whether a student could fully assess a given task, or whether they could assess only a few of the simpler criteria.  Which criteria?  What must the students necessarily include in their feedback?  What must they stay away from, and how is this related to the norms of their discipline?  We didn't have time to discuss this.  If you were a participant in the workshop and you're reading this, I'd love to hear what you came up with.

Pitfall #3: Disconnected/too long

Well, I wasn’t able to avoid this.  After talking about peer assessments for soldering and discussing how that might generalize to other performance tasks, I had participants work through peer assessment for writing. I told them that their classmate Robin Moroney had written a summary of a newspaper article (which is sort of true — the Wall Street Journal published Moroney’s summary of Po Bronson’s analysis of Carol Dweck’s research), and asked them to write Robin some feedback.  They used a slightly adjusted version of the Rubric for Assessing Reasoning that I use with my students (summarize, connect to your own experience, evaluate for clarity, consistency, causality).  We didn’t really have time to discuss this, so Dweck’s ideas got lost in the shuffle, and I was only able to nod toward the questions we’d collected at the beginning, encouraging people to come talk afterwards if their questions hadn’t been fully answered.

Questions that didn’t get answered:

Some teachers at the college use an “individualized system of instruction” — in other words, it is more like a group tutoring session than a class.  The group meets at a specified time but each student is working at their own pace.  I didn’t have time to discuss this with the teacher who asked, but I wonder if the students would benefit from assessing “fake” student work, or past students’ work (anonymized), or the teacher’s work?

One teacher mentioned a student who was adamant that peer assessment violated their privacy, and that only the teacher should see their work.  I never ran into this problem, so I'm not sure what would work best.  A few ideas I might try: have students assess "fake" work at first, so they can get the hang of it and get comfortable with the idea, or remove names from the work so that students don't know who they're assessing.  In my field, it's pretty typical for people to inspect each other's work; in fields where that is true, I would sell it as workplace preparation.

We didn’t get a chance to flesh out decision-making criteria for which tasks would benefit from peer assessment.  My practice has been to assign peer assessment for tasks where people are demonstrating knowledge or skill, not attitude or opinion.  Mostly, that’s because attitudes and opinions are not assessable for accuracy.  (Note the stipulative definitions here… if we are discussing the quality of reasoning in a student’s work, then by definition the work is a judgment call, not an opinion.)  I suppose I could have students assess each other’s opinions and attitudes for clarity — not whether your position is right or wrong, but whether I can understand what your position is.  I don’t do this, and I guess that’s my way of addressing the privacy aspect; I’d have to have a very strong reason before I’d force people to share their feelings, with me or anyone else.

Obviously I encourage students to share their feelings in lots of big and small ways.  In practice, they do — quite a lot.  But I can’t see my way clear to requiring it.  Partly it’s because that is not typically a part of the discipline we’re in.  Partly it’s because I hate it, myself.  At best, it becomes inauthentic.  The very prospect of forcing people to share their feelings seems to make them want to do it less.  It also devalues students’ decision-making about their own boundaries — their judgment about when an environment is respectful enough toward them, and when their sharing will be respectful toward others.  I’m trying to help them get better at making those decisions themselves — not make those decisions for them.  Talking about this distinction during peer assessment exercises gives me an excuse to discuss the difference between a judgment and an opinion.  Judgments are fair game, and must be assessed for good-quality reasoning.  Opinions and feelings are not.  We can share them and agree or disagree with them, but I don’t consider that to be assessment.

Finally, a participant asked about how to build student buy-in.  Students might ask, what’s in it for me?  What I’ve found is that it only takes a round or two of peer assessments for students to start looking forward to getting their feedback from classmates.  They read it voraciously, with much more interest than they read feedback from me.  In the end, people love reading about themselves.

I’ve been asked to give a presentation on Tuesday to a group of new-ish community college teachers.  Since so many of my ideas are stolen (er, reused with the kind permission of various blog authors), I thought I’d put my ideas out there for comments, suggestions, warnings, or admonishments…

The Audience

The workshop is part of a week-long course called Assessing and Evaluating Adult Learners that is mandatory for all new faculty at my school.  The participants will have zero, one, or at most two years of teaching experience.  Remember, they’re like me: no ed school degree, maybe no university degree.  Our school is not what the US considers a “community college” — Canada has no such thing as an Associate degree.  Our school offers one and two-year programs that range from plumbing to culinary arts to nail-care technician to office administration.  New faculty are hired based on their experience in the trade; for example, when I started teaching three years ago, I left a position as a sea-going design tech with the Canadian Coast Guard.

So we get hired, deal with the culture shock of leaving industry for an educational institution and, if we’re lucky, we have a summer to get organized.  That’s when people do a little planning and take some of these week-long courses.  If we’re less lucky (like I was), we’re hired one day, in a classroom the next, and our unbelievably dedicated co-workers hold us up until the following summer, when we can finally take a deep breath and get organized for the next go-around.  All new faculty are required to take 10 of these week-long courses within our first two years of employment.

The Workshop

I finished my 10 credits last summer; regular readers will be unsurprised that the facilitators marked me as an obsessive assessment geek.  They have asked me to offer a one-hour workshop about peer assessment.

Here are some of the ideas I have for the agenda.

0. Intros

I’ll ask each participant to introduce themselves with their name, program, experience using peer assessment, and any questions they have.  I’ll talk about the goals for the hour and the agenda.

1. What is peer assessment, and are you doing it already?

There are lots of simple or informal ways this can happen:

  • Students inspect each other’s work in a shop class
  • Students compare and discuss their math assignment before handing it in
  • A student helps their classmate troubleshoot a lab that’s not working

I’ll give examples and definitions, explain my assumptions about terminology, and explicitly ask whether they’ve used peer assessment.

2. Why use peer assessment?

Here’s what I was seeing in my classroom:

  • I’d give tons of feedback on assignments, and students didn’t read it, or didn’t use it
  • Some students spent their shop time running to me every five minutes asking, “is this good?”
  • Some students couldn’t figure out when they were finished, or whether their work was good, or even what question to ask, so they kept fiddling with it endlessly instead of moving on to the next task
  • Students would hand in work without looking at the rubric
  • Students were afraid to try things that were unfamiliar

And here’s why peer assessment helps:

  • Peer assessment helps students self-assess
  • More peer assessment and better self-assessment mean that teacher-assessment can be focussed where it’s really needed

3.  What is good-quality assessment?

  1. Contains a specific diagnosis about what is well done and what should be improved
  2. Contains specific ideas about how to improve
  3. Given at the time that something can be done about it
  4. Focusses on the student’s work, not their talent or intelligence

4. Practice: peer assessment of a performance task

I need a skill that’s simple and that we can all discuss together.  Since there is no clear overlap in our expertise, I’m planning to use a task that my students learn at the beginning of the program: how to inspect a solder joint, using a 3-point scale (smooth, shiny, clean).  This may be a mistake — it will increase cognitive load and threatens to bore anyone who feels alienated from “hands-on,” “skilled-trades” focussed concepts.  On the other hand, generic tasks like “riding a bike” can strike me as contrived and condescending.  I’ve got lots of slides of microscope close-ups of solder joints; I’ll show one, explain the rubric, and write some feedback, possibly using Jason Buell’s “sentence frames.”  I’ll have the participants assess my feedback on the 4-point scale above.  Then I’ll write some bad feedback, and ask them to improve it.

5. Practice: peer assessment of a writing task

For this, I’ll give them a short reading (probably The Praise A Child Should Never Hear, based on Carol Dweck’s research).  I’ll ask them to write feedback to the author, using the rubric for assessing reasoning that I’ve been using with my students.  It asks readers to assess clarity, consistency, and causality.  It will probably need to be tweaked a bit so that it doesn’t refer to a physical model.

6. Review, questions

I’ll take questions and review the ideas that came up during the intros.  The handout package will contain some notes, examples of the worksheets (including extra copies of the rubric for assessing reasoning), a list of links and resources for further reading, as well as an evaluation sheet.  I’m experimenting with a new format of evaluation, cribbed from WillAtWorkLearning.  The draft so far is here.

The Booby-Traps

It can be hard sometimes to set a respectful tone in such a short time.  Some teachers will be brand new and have no experience to draw on, not even student teaching or practice teaching or what have you (remember, they’re coming straight from a professional kitchen, not ed school).  Others will have a couple of years under their belt, and be frustrated that I appear to be explaining peer assessment as if they’re not doing it already.  The only thing I can think of here is to ask at the beginning who is using it already and how.  That should help me gauge how much I can draw on them to share their experience, and let them know that at least I’m not assuming no one else has ever heard of this before.

As Shawn Cornally puts it:

I’m a huge douche when it comes to thinking I know what someone is about to say. I always think I do because the language of teaching is so plural. I need to work on that, I bet people think I’m mean. Or, stated another way: If you think you’re already “doing” every new idea, pedagogy, and assessment strategy, you’re probably not, and you may be douchey, like me.

That Won’t Work In My Classroom

I’ve never given a workshop for teachers before.  But I’ve attended lots of them, some crushingly awful.   (To be fair, presentations in general are often crushingly awful).  I fear this:

Some majority percentage of them was watching and waiting only for one moment. They were waiting for the one phrase or condition or fragment that would allow them to write the whole idea off. They wanted the excuse to say, “That wouldn’t fly in my class.”

(credit: Dan Meyer)

I suspect that the likely source of that sentiment is something like “the students don’t know enough to do that yet.”  I’m trying to address that by showing explicitly the decision-making process of what feedback I can reasonably expect my students to give, and what I can’t.  I’m focussing on the idea that feedback doesn’t have to be about correctness.  If it is about correctness, it doesn’t have to be about completeness. Peer assessment can take some of the routine feedback off of teachers’ hands and put it in students’ hands.  That leaves more teacher time for the things that students truly can’t do (yet).

Shawn again:

Teachers want to be validated as professional educators and content knowledge specialists. This need comes out during discussions and can often be very repetitive.

I hope that distinguishing between feedback students can give and feedback teachers are needed for can alleviate this a bit.

I’m also taking pointers from Dan on this one: rehearsal, jokes about whiskey, frequent nods to all subject areas, and working through examples of how to use peer assessment with both writing tasks and performance tasks.  That leaves me drawing a blank about how to deal with this:

Even two years into teaching… I was so comfortable, cocky, and sure of my methods I would find any way to dismiss a good suggestion.

(Dan again.)

The irony is not lost on me that I’m two years into teaching (at this school) and cocky enough to get up and pretend to tell someone else how to teach.

No Through-Line

The points seem disconnected.  They’re about peer assessment but I get the feeling they don’t hang together.

Too Much Stuff

This is probably too much for a one-hour presentation.  To buy back some time, I could let participants choose which of the two practice tasks (performance or writing) they want to experiment with.

Got any other suggestions?  Fire away!