As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year.  The one that presses on me the most is “readiness.”  In other words,

  • What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
  • How do I make sure they get the greatest opportunity to learn those things?
  • How do I make sure no one graduates without those things?  And most frustratingly,
  • How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?

Some people might disagree that some of these points are worth worrying about.  If you don’t teach in a trade school, these questions may be irrelevant or downright harmful.  K-12 education should not be a trade school.  Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be).  However, I DO teach in a trade school, so these are the questions that matter to me.

Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it.  It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.

In that light, my questions about job-readiness have reached the point of obsession.  To be a technician is to inquire.  It is to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults.  However, teaching my students to inquire means they must inquire.  I can’t force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.).  At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs.  I haven’t found the balance yet.

I’ll probably write more about this as I try to figure it out.  In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing.  Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.”  In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question.  In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.

In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms.  During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement.  There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their apparent level of readiness in the telecom field might be much higher than if they worked on motor controls.  Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics).  There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments).  But overall, it’s a really useful experience.

Still, I continue to wonder about the accuracy of my judgement.  I also wonder about how to open this conversation with my colleagues.  It seems like something it would be useful to work on together.  Or would it?  The comments on Wiggins’ post are almost as interesting as the post itself.

It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.

The question isn’t “why aren’t students prepared for college.”  It’s also not “are college profs’ expectations reasonable.”  It’s “why are we so mistaken about what college instructors expect?”

My students, too, often miss this kind of subtle distinction.  It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).

Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations.  Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.

A similar problem is reported about college instructors.  The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.

Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety.  My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments.  Most of them literally can’t do it at first.  It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection.  Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”?  If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure.  As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”

One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have.  For those of us who don’t, let’s brainstorm about how we can get it, or what else might help.  Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.

I’m interested in commenter I Hodge’s point about the survey questions.  The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension.  In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared.  This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college.  I wonder why they posed those two questions so differently?

Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students.  Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses.  Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards or standardized tests or other systemic problems, I don’t see how colleges can fix, other than by administering an entrance test.  Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one.  Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system?  I really don’t know the answer to this.

Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues.  I know I’d be all for it.  There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college.  Are we ready for this?