Siobhan Curious inspired me to organize my thoughts so far about meta-cognition with her post “What Do Students Need to Learn About Learning.” Anyone want to suggest alternatives, additions, or improvements?
One thing I’ve tried is to allow students to extend their due dates at will — for any reason or no reason. The only condition is that they notify me before the end of the business day *before* the due date. This removes the motivation to inflate or fabricate reasons — since they don’t need one. It also promotes time management in two ways: one, it means students have to think one day ahead about what’s due. If they start an assignment the night before it’s due and realize they can’t finish it for some reason, the extension is not available; so they get into the habit of starting things at least two days before the due date. It’s a small improvement, but I figure it’s the logical first baby step!
The other way it promotes time management is that every student’s due dates end up being different, so they have to start keeping their own calendar — they can’t just ask a classmate, since everyone’s got custom due dates. I can nag about the usefulness of using a calendar until the cows come home, but this provides a concrete motivation to do it. This year I realized that my students, most of them of the generation that people complain is “always on their phones”, don’t know how to use their calendar app. I’m thinking of incorporating this next semester — especially showing them how to keep separate “school” and “personal” calendars so they can be displayed together or individually, and also why it’s useful to track both the dates work is due, in addition to the block of time when they actually plan to work on it.
Relating Ideas To Promote Retention
My best attempt at this has been to require it on tests and assignments: “give one example of an idea we’ve learned previously that supports this one,” or “give two examples of evidence from the data sheet that support your answer.” I accept almost any answers here, unless they’re completely unrelated to the topic, and the students’ choices help me understand how they’re thinking.
Organizing Their Notes
Two things I’ve tried are handing out dividers at the beginning of the semester, one per topic… and creating activities that require students to use data from previous weeks or months. I try to start this immediately at the beginning of the semester, so they get in the habit of keeping things in their binders, instead of tossing them in the bottom of a locker or backpack. The latter seems to work better than the former… although I’d like to be more intentional about helping them “file” assignments and tests in the right section of their binders when they get passed back. This also (I hope) helps them develop methodical ways of searching through their notes for information, which I think many students are unfamiliar with because they are so used to being able to press CTRL-F. Open-notes tests also help motivate this.
I also explicitly teach how and when to use the textbook’s table of contents vs index, and give assignments where they have to look up information in the text (or find a practise problem on a given topic), which is surprisingly hard for my first year college students!
Dealing With Failure
Interestingly, I have students who have so little experience with failure that they’re not skilled in dealing with it, and others who have experienced failure so consistently that they seem to have given up even trying to deal with it. It’s hard to help both groups at the same time. I’m experimenting with two main activities here: the Marshmallow Challenge and How Your Brain Learns and Remembers (based on ideas similar to Carol Dweck’s “growth mindset”).
Absolute Vs Analytical Ways of Knowing
I use the Foundation for Critical Thinking’s “Miniature Guide To Critical Thinking.” It’s short, I can afford to buy a class set, and it’s surprisingly useful. I introduce the pieces one at a time, as they become relevant. See p. 18 for the idea of “multi-system thinking”; it’s their way of pointing out that the distinction between “opinions” and “facts” doesn’t go far enough, because most substantive questions require us to go beyond right and wrong answers into making a well-reasoned judgment call about better and worse answers — which is different from an entirely subjective and personal opinion about preference. I also appreciate their idea that “critical thinking” means “using criteria”, not just “criticizing.” And when class discussions get heated or confrontational, nothing helps me keep myself and my students focused better than their “intellectual traits” (p. 16 of the little booklet, or also available online here) (my struggles, failures, and successes are somewhat documented under Evaluating Thinking).
What the Mind Does While Reading
This is one of my major obsessions. So far the most useful resources I have found are books by Chris Tovani, especially Do I Really Have to Teach Reading? and I Read It But I Don’t Get It. Tovani is a teacher educator who describes herself as having been functionally illiterate for most of her school years. Both books are full of concrete lesson ideas and handouts that can be photocopied. I created some handouts that are available for others to download based on her exercises — such as the Pencil Test and the “Think-Aloud.”
Ideas About Ideas
While attempting these things, I’ve gradually learned that many of the concepts and vocabulary items about evaluating ideas are foreign to my students. Many students don’t know words like “inference”, “definition”, “contradiction” (yes, I’m serious), or my favourite, “begging the question.” So I’ve tried to weave these into everything we do, especially by using another Tovani-inspired technique — the “Comprehension Constructor.” The blank handout is below, for anyone who’d like to borrow it or improve it.
To see some examples of the kinds of things students write when they do it, click through:
I’m thinking about how to make assessments even lower stakes, especially quizzes. Currently, any quiz can be re-attempted at any point in the semester, with no penalty in marks. For a student attempting a quiz for the second time, I require a corrected version of the original quiz and two completed practise problems as part of the application for reassessment. (FYI, mastery can also be demonstrated in an alternate format in lieu of a quiz, but students rarely choose that option.)
The upside of requiring practise problems is that it eliminates the brute-force approach where students just keep randomly re-trying quizzes in the hope of eventually showing mastery (this doesn’t work, and it wastes a lot of time). It also introduces some self-assessment into the process. We practise how to write good-quality feedback, including trying to figure out what caused the mistake in the first place.
The downside is that the workload in our program is really unreasonable (dear employers of electronics technicians, if you are reading this, most hard-working beginners cannot go from zero to meeting your standards in two years. Please contact me to discuss). So, students are really upset about having to do two practise problems. I try to sell it as “customized homework” — since I no longer assign homework practise problems, they are effectively exempting themselves from any part of the “homework” in areas where they have already demonstrated proficiency. The students don’t buy it though. They put huge pressure on themselves to get things right the first time, so they won’t have to do any practise. That, of course, sours our classroom culture and makes it harder for them to think well.
I’m considering a couple of options. One is, when they write a quiz, to ask them whether they are submitting it to be evaluated or just for feedback. Again, it promotes self-assessment: am I ready? Am I confident? Is this what mastery looks and feels like?
If they’re submitting for feedback, I won’t enter it into the gradebook, and they don’t have to submit practise problems when they try it next (but if they didn’t succeed that time, it would be back to practising).
Another option is simply to chuck the practise problem requirement. I could ask for a corrected quiz and good-quality diagnostic feedback (written by themselves, to themselves) instead. It would be a shame, since the practise really does benefit them, but I’m wondering if it’s worth it.
All suggestions welcome!
As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year. The one that presses on me the most is “readiness.” In other words,
- What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
- How do I make sure they get the greatest opportunity to learn those things?
- How do I make sure no one graduates without those things? And most frustratingly,
- How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?
Some people might disagree that some of these points are worth worrying about. If you don’t teach in a trade school, these questions may be irrelevant or downright harmful. K-12 education should not be a trade school. Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be). However, I DO teach in a trade school, so these are the questions that matter to me.
Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it. It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.
In that light, my questions about job-readiness have reached the point of obsession. To be a technician is to inquire. It is to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults. However, teaching my students to inquire means they must actually do the inquiring. I can’t force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.). At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs. I haven’t found the balance yet.
I’ll probably write more about this as I try to figure it out. In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing. Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.” In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question. In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should agree reasonably well with the judgement of their first employer.
In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms. During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement. There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and a student’s apparent readiness might be much higher in the telecom field than it would be in motor controls. Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics). There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments). But overall, it’s a really useful experience.
Still, I continue to wonder about the accuracy of my judgement. I also wonder about how to open this conversation with my colleagues. It seems like something it would be useful to work on together. Or would it? The comments on Wiggins’ post are almost as interesting as the post itself.
It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.
The question isn’t, “why aren’t students prepared for college.” It’s also not “are college profs’ expectations reasonable.” It’s “why are we so mistaken about what college instructors expect?”
My students, too, often miss this kind of subtle distinction. It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).
Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations. Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.
A similar problem is reported about college instructors. The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.
Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety. My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments. Most of them literally can’t do it at first. It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection. Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”? If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure. As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”
One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have. For those of us who don’t, let’s brainstorm about how we can get it, or what else might help. Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.
I’m interested in commenter I Hodge’s point about the survey questions. The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension. In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared. This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college. I wonder why they posed those two questions so differently?
Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students. Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses. Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards, standardized tests, or other systemic problems, I don’t see how colleges can fix the problem, other than by administering an entrance test. Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one. Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system? I really don’t know the answer to this.
Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues. I know I’d be all for it. There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college. Are we ready for this?
Michael Pershan kicked my butt recently with a post about why teachers tend to plateau in skill after their third year, connecting it to Cal Newport’s ideas such as “hard practice” (and, I would argue, “deep work”).
Michael distinguishes between practice and hard practice, and wonders whether blogging belongs on his priority list:
“Hard practice makes you better quickly. Practice lets you, essentially, plateau. …Put it like this: do you feel like you’re a 1st year teacher when you blog? Does your brain hurt? Do you feel as if you’re lost, unsure how to proceed, confused? If not, you’re not engaged in hard practice.”
Ooof. On one hand, it made me face my desire to avoid hard practice; I feel like I’ve spent the last 8 months trying to decrease how much I feel like that. I’ve tried to create classroom procedures that are more reusable and systematic, especially for labs, whiteboarding sessions, class discussions, and model presentations.
It’s a good idea to periodically take a hard look at that avoidance, and decide whether I’m happy with where I stand. In this case, I am. I don’t think the goal is to “feel like a first year teacher” 100% of the time; it’s not sustainable and not generative. But it reminds me that I want to know which activities make me feel like that, and consciously choose some to seek out.
Michael makes this promise to himself:
It’s time to redouble my efforts. I’m half way through my third year, and this would be a great time for me to ease into a comfortable routine of expanding my repertoire without improving my skills.
I’m going to commit to finding things that are intellectually taxing that are central to my teaching.
It made me think about what my promises are to myself.
Be a Beginner
Do something every summer that I don’t know anything about and document the process. Pay special attention to how I treat others when I am insecure, what I say to myself about my skills and abilities, and what exactly I do to fight back against the fixed-mindset that threatens to overwhelm me. Use this to develop some insight into what exactly I am asking from my students, and to expand the techniques I can share with them for dealing with it.
Last summer I floored my downstairs. The summer before that I learned to swim — you know, with an actual recognizable stroke. In both cases, I am proud of what I accomplished. In the process, I was amazed to notice how much concentration it took not to be a jerk to myself and others.
Learn More About Causal Thinking
I find myself being really sad about the ways my students think about causality. On one hand, I think my recent dissections of the topic are a prime example of “misconceptions listening” — looking for the deficit. I’m pretty sure my students have knowledge and intuition about cause that I can’t see, because I’m so focused on noticing what’s going wrong. In other words, my way of noticing students’ misconceptions is itself a misconception. I’d rather be listening to their ideas fully, doing a better job of figuring out what’s generative in their thinking.
What to do about this? If I believe that my students need to engage with their misconceptions and work through them, then that’s probably what I need too. There’s no point in my students squashing their misconceptions in favour of “right answers”; similarly, there’s no point in me squashing my sadness and replacing it with some half-hearted “correct pedagogy.”
Maybe I’m supposed to be whole-heartedly happy to “meet my students where they are,” but if I said I was, I’d be lying. (That phrase has been used so often to dismiss my anger at the educational malpractice my students have endured that I can’t even hear it without bristling). I need to midwife myself through this narrow way of thinking by engaging with it. Like my students, I expect to hold myself accountable to my observations, to good-quality reasoning, to the ontology of learning and thinking, and to whatever data and peer feedback I can get my hands on.
My students’ struggle with causality is the puzzle from which my desire for explanation emerged; it is the source of the perplexity that makes me unwilling to give up. I hope that pursuing it honestly will help me think better about what it’s like when I ask my students to do the same.
Interact with New Teachers
Talking with beginning teachers is better than almost anything else I’ve tried for forcing me to get honest about what I think and what I do. There’s a new teacher in our program, and talking things through with him has been a big help in crystallizing my thoughts (mutually useful, I think). I will continue doing this and documenting it. I also put on a seminar on peer assessment for first-year teachers last summer; it was one of the more challenging lesson plans I’ve ever written. If I have another chance to do this, I will.
Work for Systemic Change
I’m not interested in strictly personal solutions to systemic problems. I won’t have fun, or meet my potential as a teacher, if I limit myself to improving me. I want to help my institution and my community improve, and that means creating conditions and communities that foster change in collective ways. For two years, I tried to do a bit of this via my campus PD committee; for various reasons, that avenue turned out not to lead in the directions I’m interested in going. I’ve had more success pressing for awareness and implementation of the Workplace Violence Prevention regulations that are part of my local jurisdiction’s Occupational Health and Safety Act.
I’m not sure what the next project will be, but I attended an interesting seminar a few months ago about our organization’s plans for change. I was intrigued by the conversations happening about improving our internal communication. I’ve also had some interesting conversations recently with others who want to push past the “corporate diversity” model toward a less ahistorical model of social justice or cultural competence. I’ll continue to explore those to find out which ones have some potential for constructive change.
Design for Breaks
I can’t do this all the time or I won’t stay in the classroom. I know that now. As of the beginning of January, I’ve reclaimed my Saturdays. No work on Saturdays. It makes the rest of my week slightly more stressful, but it’s worth it. For the first few weeks, I spent the entire day alternately reading and napping. Knowing that I have that to look forward to reminds me that the stakes aren’t as high as they sometimes seem.
I’m also planning to go on deferred leave for four months starting next January. After that, I’ve made it a priority to find a way to work half-time. The kind of “intellectually taxing” enrichment that I need, in order for teaching to be satisfying, takes more time than is reasonable on top of a full-time job. I’m not willing to permanently sacrifice my ability to do community volunteer work, spend time with my loved ones, and get regular exercise. That’s more of a medium-term goal, but I’m working a few leads already.
Anyone have any suggestions about what I should do with 4 months of unscheduled time starting January 2014?
Sometimes I need to have all the students in my class improve their speed or accuracy in a particular technique. Sometimes I just need everyone to do a few practice problems for an old topic so I can see where I should start. But I don’t have time to make (or find) the questions, and I definitely don’t have time to go through them with a fine-toothed comb.
One approach I use is to have students individually generate and grade their own problems. They turn in the whole, graded, thing and I write back with narrative feedback. I get what I need (formative assessment data) and they get what they need — procedural practice, pointers from me, and some practice with self-assessment.
Note: this only works for problems that can be found in the back of a textbook, complete with answers in the appendix.
Here’s the handout I use.
What I Get Out of It
The most useful thing I get out of this is the “hard” question — the one they are unable to solve. They are not asked to complete it: they are asked to articulate what makes that question difficult or confusing.
- Students choose questions that are easy, medium, and hard for them. This means they must learn to anticipate the difficulty level of a question before attempting it.
- If they get a question wrong, they must either troubleshoot it or solve a different one.
- They turn in their questions clearly marked right or wrong.
- I don’t have to grade it — just read it and make comments
- The students get to practice looking at things they don’t fully understand and articulating a question about it
- I get to find out what they know and what they (think they) don’t know.
- Students can work together by sharing their strategies, but not by sharing their numbers, since everyone ends up choosing different problems.
- It makes my expectations explicit about how they should do practice questions in general: with the book closed, page number and question number clearly marked, with the schematics copied onto the paper (“Even if there’s no schematic in the book?!” they ask incredulously — clearly the point of writing down the question is just to learn to be a good scribe, not to improve future search times), etc.
I give this assignment during class, or at least get it started during class, to reduce copying. Once students have chosen and started their questions, they’re unlikely to want to change them.
This was the first round of student feedback this semester. I handed out the “Teacher Skill Sheet” again, but with questions on the front instead of the usual bar graphs. I wrote it in the style of the applications for reassessment that they have been submitting to me.
“This semester, I am reassessing my ability to teach effectively; the criteria are shown on the back. My evidence in support of this includes creating learning activities like research, measurement, and making judgements that are similar to the job; making reassessments available so students can improve at their own pace; and recording how students are progressing, not just what they’ve finished.
What aspects of this course help you think, measure, research, and act more like an electronics tech?
What aspects of this course make it harder for you to think, measure, research, and act more like an electronics tech?
Any other comments about the program, the school, or the teacher?”
The responses were very positive — maybe too positive. I think the wording is too personal; it seemed to elicit a lot of reassurance! This makes me wonder about how my students are applying for reassessment. Do they experience the process like a judgment of their humanity? I do see some evidence of students assuring me that they “will never, ever do it again” rather than explaining how their thinking has changed.
There were a lot of thoughtful comments outside of “all the faculty are very good teachers.” (Not that I’m complaining about this — just a little worried about whether my students felt truly comfortable writing other things.) Here are a few examples.
Definitely helps me think more precisely, look for right information, see what I really need or not, measuring with safety, record the result with respect for those who are going to be reading it.
The motivation of being interested in the subject from the beginning.
The right environment and tools to help.
Daily hands-on problem solving and trouble shooting exercises
The Almighty Model Idea
Talking in front of class where you got to know your stuff
Making It Harder
Trying to be on a same page with people that had a fair amount of background knowledge about electronics
Lack of calibration of equipment
I’m glad to see lots of evidence that they “get” why we are spending so much time carefully adding things to the model and presenting info to each other. We’ve had some pretty tough presentation sessions lately so I was expecting a bit of blowing off steam.
Also, the calibration comment reminds me that my students are obsessed with metrology. It’s really interesting. The most consistent topic of questions is “how wrong is the meter.” They come back again and again to the idea of our measurement instruments, what is going on in there exactly, how we can make them more accurate, how much “off” they are, and how crazy it is that even if we bought the most expensive meter in the world we still couldn’t be sure that it was “right.” I can’t help feeling that this is an almost spiritual question — that they are digesting some new ways of thinking about what “truth” is and where it does (and doesn’t) come from.
Frank Noschese just posed some questions about “just trying something” in problem-solving, and why students seem to do it intuitively with video games but experience “problem-solving paralysis” in physics. When I started writing my second long-ish comment I realized I’m preoccupied with this, and decided to post it here.
What if part of the difference is students’ reliance on brute force approaches?
In a game, which is a human-designed environment, there are a finite number of possible moves. And if you think of typical gameplay mechanics, that number is often 3-4. Run left, run right, jump. Run right, jump, shoot. Even if there are 10, they’re finite and predictable: if you run from here and jump from exactly this point, you will always end up at exactly that point. They’re also largely repetitive from game to game. No matter how weird the situation in which you find yourself, you know the solution is some permutation of run, jump, shoot. If you keep trying you will eventually exhaust all the approaches. It is possible to explore every point on the game field and try every move at every point — the brute force approach (whether this is necessary or even desirable is immaterial to my point).
In nature, being as it is a non-human-designed environment, there is an arbitrarily large number of possible moves. If students surmise that “just trying things until something works” could take years and still might not exhaust all the approaches, well, they’re right. In fact, this is an insight into science that we probably don’t give them enough credit for.
Now, realistically, they also know that their teacher is not demanding something impossible. But being asked to choose from among infinite options, and not knowing how long you’re going to be expected to keep doing that, must make you feel pretty powerless. I suspect that some students experience a physics experiment as an infinite playing field with infinite moves, of which every point must be explored. Concluding that that’s pointless or impossible is, frankly, valid. The problem here isn’t that they’re not applying their game-playing strategies to science; the problem is that they are. Other conclusions that would follow:
- If there are infinite equally likely options, then whether you “win” depends on luck. There is no point trying to get better at this since it is uncontrollable.
- People who regularly win at an uncontrollable game must have some kind of magic power (“smartness”) that is not available to others.
And yet, those of us on the other side of the lesson plan do walk into those kinds of situations. We find them fun and challenging. When I think about why I do, it’s because I’m sure of two things:
- any failure at all will generate more information than I have
- any new information will allow me to make better quality inferences about what to do next
I don’t experience the game space as an infinite playing field of which each point must be explored. I experience it as an infinite playing field where it’s (almost) always possible to play “warmer-colder.” I mine my failures for information about whether I’m getting closer to or farther away from the solution. I’m comfortable with the idea that I will spend my time getting less wrong. Since all failures contain this information, the process of attempting an experiment generally allows me to constrain it down to a manageable level.
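The contrast can be sketched as a toy number-guessing game (my own hypothetical illustration in Python — the function names and the 1–1000 range are invented for the example, not anything from class): brute force treats every point on the field as equally likely and tries them all, while "warmer-colder" mines each failure for information and shrinks the field with every attempt.

```python
import random

def brute_force(secret, options):
    """Try every option in turn; each failure eliminates only one move."""
    for attempts, guess in enumerate(options, start=1):
        if guess == secret:
            return attempts
    return None

def warmer_colder(secret, lo, hi):
    """Use the 'warmer/colder' signal from each failure to halve the field."""
    attempts = 0
    while lo <= hi:
        attempts += 1
        guess = (lo + hi) // 2
        if guess == secret:
            return attempts
        elif guess < secret:
            lo = guess + 1   # "warmer" means the answer is above the guess
        else:
            hi = guess - 1   # "colder" means the answer is below it

secret = random.randint(1, 1000)
print(brute_force(secret, range(1, 1001)))  # can take up to 1000 tries
print(warmer_colder(secret, 1, 1000))       # never takes more than 10
```

The point isn't the code; it's that the second player needs two things the first doesn't: a feedback signal from each failure, and the belief that the signal is worth reading.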
My willingness to engage with these types of problems depends on a skill (extracting constraint info from failures), a belief (it is almost always possible to do this), and an attitude (“less wrong” is an honourable process that is worth being proud of, not an indictment of my intelligence) that I think my students don’t have.
Richard Louv makes a related point in Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder (my review and some quotes here). He suggests that there are specific advantages to unstructured outdoor play that are not available otherwise — distinct from the advantages that are available from design-y play structures or in highly-interpreted walks on groomed trails. Unstructured play brings us face to face with infinite possibility. Maybe it builds some comfort and helps us develop mental and emotional strategies for not being immobilized by it?
I’m not sure how to check, and if I could, I’m not sure I’d know what to do about it. I guess I’ll just try something, figure out a way to tell if it made things better or worse, then use that information to improve…
Well, it would have, if I’d been a little quicker on the uptake.
Recap: In order to make sense of their course on High-Reliability Soldering, my students must read and interpret text that is both dry and fastidiously precise. They are able to read everyday text, and can skim for main ideas, but have no practice with “analytical reading” (which I mistakenly called “inspectional reading” — apologies to Mortimer Adler), and no strategy for deciding which reading techniques they should use and when.
In my last post, I wrote about helping students learn a large amount of vocabulary quickly. I asked students to choose 1 or 2 techniques that would help them understand and remember the meaning of each vocabulary item. The choices I offered were:
- Rank by importance
- Draw a diagram
- Give an example/list other names
- Identify confusion
- Ask a question
I adapted these techniques from Cris Tovani’s excellent book Do I Really Have to Teach Reading? I deviated only slightly from her suggestions: she uses the term “visualize,” which I interpreted as “draw a diagram.” She calls them “strategies” (as do most reading comprehension authors); I don’t think they are strategies, I think they’re tactics. But I don’t think I’ve offended against the spirit of the thing.
The following week, we went beyond vocab to interpretation. As a comprehension constructor (Tovani’s word for a scaffolded exercise in comprehension techniques), I used a template for a process control plan. I scaled back and only asked for techniques #1 — Rank by importance (summarize the inspection criteria), #5 — Ask questions, and #4 — Identify confusion (Decide what kind of evidence you will provide).
Strangely, my class worked at this with intensity. Frankly, I didn’t understand why. Process control plans are a tedious but necessary evil. It started to make more sense when I read them. They were surprisingly good. I had accidentally put the students in a position where gaming the system led to high-quality work: the more concisely they wrote their criteria, the less work it would be to inspect their soldering. Interesting result: the students mercilessly cut through the “shoulds” and the “recommendations” and the “guidelines” and pulled out the sufficient conditions. In situations where there was more than one set of sufficient conditions (more than one way to achieve a Class 3 rating), they chose the shortest set.
I missed an opportunity here, but I’ll be on it next time: this is the place to start talking about necessary and sufficient conditions. Most students are unfamiliar with this terminology, and even with the underlying concepts. This leads them to have trouble distinguishing between a definition and a characteristic which… leads them to not understand the textbook. In my list of critical reading techniques, I may replace “summarize important points” with “define in your own words” (this probably works best with non-textbook sources, where there’s no glossary entry to regurgitate).
Moral of the story: my students’ reading comprehension improved when they knew in advance that they would use the text to create an assessment plan. Their logical reasoning improved when they planned to use it to assess themselves. And this took me by surprise. You’d think I had never heard of assessment for learning.
Follow up: I took another page from Tovani’s book and used student work to demonstrate some ideas I wanted to reinforce. I found quote-worthy examples in every process control plan, and put them on the projector at our next class meeting. The students seemed inordinately pleased to see their names next to examples of best practices. Result: some students modified their plans to make use of these best practices. One student rewrote his from scratch.
Points they earned for the first draft: 0
Points they earned for improving their plans: 0
Improvement in reading comprehension and critical thinking: priceless
Could there be anything more boring than regulatory standards for quality assurance? I’m teaching a new-to-me course in the 5-week intersession on the IPC Requirements for Soldered Electrical and Electronic Assemblies. I was mentally prepared to hate it, along with its vendor-supplied PowerPoint® presentations and multiple-choice tests. Luckily, a few things rescued it for me:
- We’re not accredited by the regulating agency, so as long as I meet the outcomes, I don’t have to stick to the supplied curriculum
- I’ve been looking for an opportunity to do some reading comprehension instruction and boy, did I get my wish.
- Talking about regulations can lead to “why” questions about how industrial automation affects the small-town and rural region where we live, ethics, and craftsmanship. In order to have those conversations we needed to understand, use, and critique the lingo.
I started with these assumptions:
- My students are able to decode, respond to, and draw conclusions from the everyday text of adult life (newspaper articles, emails)
- They can generally find the main idea in a textbook passage, but will then complain that they “don’t get it” and “can’t teach myself from books.”
- There are a few people with print-related learning disabilities in the room, and we check in regularly to strategize about that, but so far they don’t seem to be having different difficulties than their peers.
So I decided to call it “technical reading,” mostly so that the students wouldn’t think I was accusing them of being illiterate. Resentment of “book-learning” and the classism that often goes along with it is a sore point for a lot of tradespeople, so it required a bit of care. Here’s what I mean by technical reading. When I’m reading a newspaper article about the recent election, I may not know who won, but I already know what an election is. In a textbook, I am asking students to think about new concepts, in addition to new ways of connecting old concepts. Mortimer Adler’s book How To Read a Book calls this “analytical reading” [corrected — M] and distinguishes its strategies from basic reading.
By accident, I recently learned from my most reluctant readers that this semester’s lab book is “way easier to understand,” even though it’s no different in style than last semester’s. I suspect it’s because I’ve stopped assigning “Lab 31” and started assigning “predict the effects of AC and DC on a transformer. Build a circuit to test your predictions.” Having a purpose apparently made the reading seem both easier and better-written.
I started with vocabulary. I know, yuck. But when I read through the first two quizzes (overview of the main ideas), I counted fifty terms that my students would likely not recognize, or not know their meaning in this context.
So I tried my hand at designing a “reading comprehension constructor.” Cris Tovani writes in Do I Really Have To Teach Reading about how to design these: they are scaffolds for particular comprehension strategies. I decided to show a variety of strategies and ask them to choose 1-2 that they found most helpful.
At our first class meeting, after introducing the basic idea of the course, I handed this out. I explained that each row is a main idea, and that each column is a different strategy for understanding and remembering. I talked through my thinking by filling in the top line, and told the students that they were free to use any one strategy or more than one. Then I asked them to fill in as many as they could, and put their count at the top of the sheet.
The goal here was to put their brains on alert about terms that are important, and to set them up for a win when their count goes up at the end of the day. I arranged the terms in the order that we would encounter them.
Then, we played vocabulary bingo. Students marked their bingo cards when I mentioned one of the technical terms (they’re the same ones from the handout above, but arranged alphabetically). The boxes had to be marked with either a definition or the section of the standard that introduced the term. To win, you had to explain the meanings of 5 terms in a row. The prize was a 10 minute break for the class, to be used at the winner’s discretion (because the intersession is compressed, we have a full day of class in two 3h blocks).
I spent about 90 min in the morning introducing these ideas, and another hour in the afternoon (I know, deadly. Lots more to improve for next year… in hindsight, I should have been taking regular breaks for people to update their Key Concepts handout). Lots of me talking, with occasional questions, short whole-group discussions, and videos. We had our regular breaks, and two extra breaks on account of people winning bingo.
At the end of the presentation, with 90 min left in our afternoon block, I asked the group to return to the Key Concepts handout, update the information they had written that morning, and fill in any gaps. This took most people another 45-60 minutes (there are 50 terms, remember). There are some blank spaces at the end for any terms a student wants to remind themselves of. They handed them in, I read them and wrote back. These became our custom “dictionaries” for the rest of the course.
How I Assessed Their Comprehension
- Written process control plans
Ungraded. Every one of them was usable.
- Individual conversations about their soldering and inspection
Graded. All students have assessed their soldering at least once so far, using their process control plans as a rubric.
- Multiple Choice Tests
88% of scores were 70 or better, across 5 tests
- Debates about interpretations of multiple choice questions
Ungraded, obviously, but fascinating and great source of clues about their comprehension. The conversations we’ve had after quizzes have shown a remarkable degree of finesse. The “why” questions I mentioned at the top came out in spades (environmental legislation, social consequences of industrial automation, economics, international relations, ethics, craftsmanship, etc). Students clearly related these to supporting evidence in the spec.
Did It Work?
Overall, I think it was helpful. I’ve gotten questions about individual terms, but no generalized “I don’t get any of this” frustration. Questions about the meanings of words generally came up in private conversations, and we looked back at their “dictionary” together to find clues or fill in blanks. The group is using this terminology fluently and arguing about subtle interpretive points, which I think is pretty impressive considering how recently they learned it — not to mention the density of the text (see example at the top!).
A few people have asked about implementing the “2-copy quiz,” so I thought I would write a bit about what I’m doing, what’s going well so far, and what I realize in hindsight I should have done differently.
Also, I want to say thanks and welcome to the new readers who’ve joined since that post was “Freshly-Pressed.” I’m delighted that you’ve decided to stay. Don’t hesitate to comment on the older items if you are interested — none of these conversations are finished, by a long shot.
Backstory of the 2-Copy Quiz
I got intrigued by the idea of immediate feedback. It’s easy with after-class make-up quizzes, and I was trying to figure out how to do it with in-class quizzes where a large group of people was likely to finish all at once.
1. I could grade the quizzes and hand them back the next day
Too late — students have already forgotten why they wrote reactance when they should have thought about resistance. Also, since the paper’s already graded, they know whether everything’s right or wrong. It takes the question away.
2. I could collect their work on one piece of paper, and they would still have the sheet of questions while we discuss the answers
Better, but still not what I want. They will have forgotten the details of what they wrote and that’s where the devil is. If I present the correct answers in a “clear, well illustrated way, students believe they are learning but they do not engage … on a deep enough level to realize that what was presented differs from their prior knowledge.” This is a quote from a video about superficial learning made by Derek Muller, of Veritasium science vlog fame. Derek goes on to say that those misconceptions can be cleared up by “presenting students’ misconceptions alongside the scientific concepts.” It was the alongside part I wanted. It’s not until their thoughts and their actions are suddenly brought into focus at the same time that they realize there is a contradiction.
3. I could collect their papers, run to the staff room, photocopy them, and come back to review the answers.
And while I was gone, they squeezed all the burning curiosity out of their questions among themselves. Which is what they normally do in the hallway.
So the conclusion followed: we needed two copies of the quiz. One for me to grade later, one for them to keep while we reviewed the answers right away. One thing I like about this method is that it doesn’t interrupt the learning. It actually removes an interruption that would normally happen (students having to walk out into the hall to talk about the test). By inviting the conversation into the classroom, I can be a part of it if that’s helpful, or I can organize the students into groups and get out of the way.
Goal: for students to assess the goodness of their answer
We often met this goal. Using class time to discuss “rightness” directs their point-chasing energy toward the good judgement I want them to develop (would this be considered educational judo?). If your students are like mine, they will stop at nothing to find out if they “got the right answer.” Sometimes this makes me tired, what with the assumption that there’s a single right answer, and the other assumption that rightness is all that counts. But then I realized that motivation is motivation, and I could probably teach them to jump through flaming hoops or walk on a bed of nails if I put those things between a student who’s just written a test and the “right answers.”
So I put some self-assessment in the way instead. Their desire to “get the right answer” extends to their self-assessment, of course, but the conversations became more nuanced throughout the term. At first there was a lot of “will you accept this answer” and “will you accept that answer.” I tried to help them make inferences about whether an answer is good enough. I also opened myself up to changing my definition of the right answer if they could substantiate their arguments for an alternate perspective. Hell, alternate perspectives and substantiating their thinking are more important than whatever was on the quiz. Later on in the term, I started hearing things like, “No, I don’t think this answer is good enough, it’s a true statement but it doesn’t answer the question,” or “I think this is too vague to be considered proof of this skill.” They’d rather say it before I say it. Which means I have to be really careful what language I use during this conversation. They will repeat it.
I expect the students to write feedback to themselves on their quiz paper. It can be praise or constructive criticism, but there has to be something for each question. They see the value of this later when they’re studying to reassess, but it’s a hard sell at first, and I realized after a few weeks that my students actually had no idea how to do it. For a while, I collected their worksheets at the end of class to read and write back to them. But I don’t pass back the answer sheets that I correct. If they know that I’m going to give the answer and some feedback, it takes the responsibility off of them to do it for themselves.
What worked well
- It’s easy and cheap. Just print off 2 quiz papers for every student, and have them fill out both.
- It’s flexible. You could have them make two full copies of their work. You could ask them to make a full copy for themselves and an answer copy for the teacher (my tactic at the moment). You could ask them to make an answer copy for the teacher, and some rough notes for themselves so they can remind themselves of their thinking (what my students actually do).
- In keeping with the idea of going with the flow of the learning, I let the class direct the questioning. There’s no reason we have to review the first question first. Often there’s one question that everyone is dying to know the answer to, so we talk about that one.
- I get an instant archive of student work. Good for preparing my lesson plans next year, reconstituting my gradebook when a computer crashes, turning over the course to another instructor, submitting documentation to accrediting agencies, etc. etc.
What didn’t always work well
- It’s time-consuming to have to copy things to another page. For numerical answers, it’s pretty easy to copy the final answer, but then you can’t see their work. For short-answer/essay questions, it’s going to get seriously annoying for students to copy them in full to another page (I make them do it anyway). Multiple-choice is pretty painless, but it’s a pain to feel limited to one kind of question.
- Students don’t always see the value of having their own copy, so they fill out my copy and leave theirs blank. See Backstory #2 above.
- Students don’t always see the value of showing their work, so they fill out two copies with nothing but answers. See Backstory #2 above.
- Students don’t always see the value of assessing their work at all. The teacher is going to decide the final grade, and the teacher might disagree with their self-assessment, so why not just wait and let “the experts” make the judgement call.
- Students don’t always see the value of writing feedback to themselves.
- Students sometimes have no idea how to write feedback to themselves.
I struggled with the attitude of “wait for the teacher to decide if it’s good enough.” I should have made it clearer that improving their ability to evaluate their answers was the point, not a side-effect. I deliberately held off updating my online gradebook, so that they had to depend on themselves to track their skills (just got my student evals back today and my “poor tracking” of their grades is the #1 complaint). It’s said best by Shawn Cornally from Think Thank Thunk: “I am not your grade’s babysitter.” In fact I sometimes wondered if I should stop using the online gradebook altogether. Yes, sometimes I disagree with their self-assessment; that’s why it’s important for them to take part in the group discussion after the quiz. That’s where I discuss what I’m looking for in an answer and help them figure out if they’ve provided it. This is hard on them, and makes them feel insecure, for lots of reasons, and I need to keep thinking about it.
One reason is that writing feedback is something I realized (a bit late) that I had to teach. I did this in a hurry and without the scaffolding it deserved. Kelly O’Shea of Physics! Blog! broke it down for me:
How often do you think they’ve practiced the skill of consciously figuring out what caused them to make a mistake? How often do we just say, “That’s okay, you’ll get it next time.” instead of helping them pick out what went wrong? My guess is that they might not even know how to do it.
- I’m still not sure how to teach them to create feedback for themselves, but it goes to the top of the pile of things to introduce in September next year, not February.
- I’m toying with the idea that the students should keep an online gradebook updated. Then I could check up on their scoring (and leave them some feedback about it), instead of them checking up on my scoring, and being annoyed that it’s not posted yet. Not sure logistically how to do this. (Edit: ActiveGrade is already working on this)
- A portable scanner. For $300 I could solve Didn’t-Work #1, 2, and 3. Just scan their quiz papers as they finish. Makes it extra-easy for me to annotate the electronic copy and maybe make a screencast for a particular student, if warranted. Saves trees, too.
Update, July 29, 2011: If you already own a smartphone, the portable scanner is free, and it’s called CamScanner.