Siobhan Curious inspired me to organize my thoughts so far about meta-cognition with her post “What Do Students Need to Learn About Learning.” Anyone want to suggest alternatives, additions, or improvements?
One thing I’ve tried is to allow students to extend their due dates at will — for any reason or no reason. The only condition is that they notify me before the end of the business day *before* the due date. This removes the motivation to inflate or fabricate reasons — since they don’t need one. It also promotes time management in two ways: one, it means students have to think one day ahead about what’s due. If they start an assignment the night before it’s due and realize they can’t finish it for some reason, the extension is not available; so they get into the habit of starting things at least two days before the due date. It’s a small improvement, but I figure it’s the logical first baby step!
The other way it promotes time management is that every student's due dates end up being different, so they have to start keeping their own calendar — they can't just ask a classmate, since everyone's got custom due dates. I can nag about the usefulness of using a calendar until the cows come home, but this provides a concrete motivation to do it. This year I realized that my students, most of them of the generation that people complain is "always on their phones", don't know how to use their calendar app. I'm thinking of incorporating this next semester — especially showing them how to keep separate "school" and "personal" calendars so they can be displayed together or individually, and also why it's useful to track both the dates work is due and the blocks of time when they actually plan to work on it.
Relating Ideas To Promote Retention
My best attempt at this has been to require it on tests and assignments: “give one example of an idea we’ve learned previously that supports this one,” or “give two examples of evidence from the data sheet that support your answer.” I accept almost any answers here, unless they’re completely unrelated to the topic, and the students’ choices help me understand how they’re thinking.
Organizing Their Notes
Two things I've tried are handing out dividers at the beginning of the semester, one per topic… and creating activities that require students to use data from previous weeks or months. I try to start this immediately at the beginning of the semester, so they get in the habit of keeping things in their binders instead of tossing them in the bottom of a locker or backpack. The latter seems to work better than the former… although I'd like to be more intentional about helping them "file" assignments and tests in the right section of their binders when they get passed back. This also (I hope) helps them develop methodical ways of searching through their notes for information, which I think many students are unfamiliar with because they are so used to being able to press Ctrl-F. Open-notes tests also help motivate this.
I also explicitly teach how and when to use the textbook's table of contents vs index, and give assignments where they have to look up information in the text (or find a practice problem on a given topic), which is surprisingly hard for my first-year college students!
Dealing With Failure
Interestingly, I have students who have so little experience with it that they’re not skilled in dealing with it, and others who have experienced failure so consistently that they seem to have given up even trying to deal with it. It’s hard to help both groups at the same time. I’m experimenting with two main activities here: the Marshmallow Challenge and How Your Brain Learns and Remembers (based on ideas similar to Carol Dweck’s “growth mindset”).
Absolute Vs Analytical Ways of Knowing
I use the Foundation for Critical Thinking's "Miniature Guide To Critical Thinking." It's short, I can afford to buy a class set, and it's surprisingly useful. I introduce the pieces one at a time, as they become relevant. See p. 18 for the idea of "multi-system thinking"; it's their way of pointing out that the distinction between "opinions" and "facts" doesn't go far enough, because most substantive questions require us to go beyond right and wrong answers into making a well-reasoned judgment call about better and worse answers — which is different from an entirely subjective and personal opinion about preference. I also appreciate their idea that "critical thinking" means "using criteria", not just "criticizing." And when class discussions get heated or confrontational, nothing helps me keep myself and my students focused better than their "intellectual traits" (p. 16 of the little booklet, also available online here) (my struggles, failures, and successes are somewhat documented in Evaluating Thinking).
What the Mind Does While Reading
This is one of my major obsessions. So far the most useful resources I have found are books by Cris Tovani, especially Do I Really Have to Teach Reading? and I Read It But I Don't Get It. Tovani is a teacher educator who describes herself as having been functionally illiterate for most of her school years. Both books are full of concrete lesson ideas and handouts that can be photocopied. I created some handouts that are available for others to download based on her exercises — such as the Pencil Test and the "Think-Aloud."
Ideas About Ideas
While attempting these things, I’ve gradually learned that many of the concepts and vocabulary items about evaluating ideas are foreign to my students. Many students don’t know words like “inference”, “definition”, “contradiction” (yes, I’m serious), or my favourite, “begging the question.” So I’ve tried to weave these into everything we do, especially by using another Tovani-inspired technique — the “Comprehension Constructor.” The blank handout is below, for anyone who’d like to borrow it or improve it.
To see some examples of the kinds of things students write when they do it, click through:
How I got my students to read the text before class: have them do their reading during class.
Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.
It’s not transformative educational design, but it’s an improvement, with these advantages:
- It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
- The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric. That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
- When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class. That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
- Knowing that I will address their questions provides an incentive to actually ask them. It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
- Even in a classroom of 20 people, each person gets an individualized pace.
- I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
- We're using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo. This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns. Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…
It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.
I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison (e.g. a Wikipedia excerpt), then my summary and interpretation of the sources, and the inferences that I think follow from them. It's pretty similar to what I would say if I were lecturing. I write the summaries in an informal tone intended to start a conversation. Here's an example:
And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).
Highlights of student feedback:
Noticing connections to earlier learning
When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite
When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol
Noticing confusion or contradiction
What do f2(OL) and Av(OL) stand for?
I’m still not sure what slew-induced distortion is.
I don’t know how to make sense of the f2 = funity/Av(CL). Is f2 the bandwidth?
In [other instructor]'s course, we built an audio monitor, and we used an op amp. We used a somewhat low frequency (1 kHz), and we still got a gain of 22.2. If I use the equation, the bandwidth would be 45 Hz? Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?
Asking for greater precision
What is the capacitance of the internal capacitor?
Is this a “flipped classroom”?
One point that stuck with me about many "flipped classroom" conversations is designing the process so that students do the low-cognitive-load activities when they're home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they're in class, surrounded by supportive peers and an experienced instructor.
This seems like a logical argument. The trouble is that reading technical material is a high-cognitive-load activity for most of my students. Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone. The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.
The conclusion follows: have them read it in class, where I can support them.
This morning, my students are reading about negative feedback and assessing the information provided using our standard rubric, which asks them to summarize and write their questions. They’re finding it difficult to understand, almost too confusing to summarize. I remind them that that’s ok — to summarize what they can, if they can. I also tell them to write questions as they read, not to wait until the end of the passage to write them down.
Especially, I remind them that a common cause of "getting stuck" is waiting until they understand the paragraph before writing down a question. The problem, of course, is that you might not be able to understand the passage until after the question is answered. Waiting for understanding before asking questions is like waiting to be fit before going to the gym.
I have this conversation with one student:
Student: “What I’m afraid of is, if I get partway through the paragraph and write a question, then I get later in the paragraph and write down another question, I’ll get to the end and realize, Oh, that’s what it meant, and I won’t need to ask that question any more.”
Me, joking: “So what happens then? What horrible consequence ensues?”
Student: “I have to kill an eraser!”
Me: "No need to erase it. Just write a note that says, 'Oh, now I get that… [whatever you just understood].' Have you ever noticed how often I do that on your quizzes and papers? I write questions as I'm reading, then I cross them out when I get to the end and write a note that says, 'Never mind, I see that you've answered the questions down here.'"
Student: [noncommittal shrug, smiling, seems willing to try this]
I think that’s an ok way to get the point across. I sit back down. Then I need to be a smart ass. I go back to chat with the same student. “You know, from our conversation earlier, it sounded like you were saying, ‘I’m afraid that if I ask questions, I’ll get it.’ ”
My point, of course, is that asking questions, thinking through our questions, and clarifying to ourselves what question we mean to ask can be an important part of sense-making, and can even help us answer our own questions. But that’s not how it comes across to the student. Now he’s been backed into a corner, shown the absurdity of something he just said. He scrambles to defend his statement. “No, what I meant was that if I ask questions while I’m reading, I might get to the end and not understand my… [pause] I can’t put it into words.”
Notes to self
- Students sometimes think they should delay asking questions until after they have understood something. This causes deadlock and frustration. Strategize about this with students.
- Pointing out someone's misconception, especially in the middle of class, does not usually result in a graceful acknowledgment of "oh, yeah, that doesn't really make sense, does it?" It usually results in backpedaling and attempts to salvage the idea by re-interpreting, suggesting that I didn't understand them, or saying "I understand it, I just can't put it into words."
- The phrase “I understand it but I just can’t put it into words” is highly correlated with “You just pointed out a misconception to me and now I must save face by avoiding your point at all costs.” Use this clue to improve.
- Dear Mylène, you think you’re too highly evolved to use “elicit-confront-resolve” to address student misconceptions, but you’re mistaken. It’s causing students to avoid their misconceptions instead of facing them. Find a way to do something else.
- kept working after the end of class
- asked significant questions (“What’s the difference between MethodX and MethodY,” “What’s the difference between colours and pixels?”)
- turned in unusually ambitious first programming assignments (50-200 lines of code)
- made sense of abstract concepts (“Isn’t a function just a way to name some lines of code?”)
- reported finding it “surprisingly” enjoyable
Part of this is no doubt attributable to the “media computation” approach that he uses, which seems like a very cool way to introduce people to programming. Unfortunately I am stuck with a 2-line LCD that has timing issues I haven’t figured out completely, so we’ll be starting with blinking lights. But there are definitely aspects that I can incorporate, including having students
- type in examples
- develop line-by-line explanations for themselves
- compare examples that differ in small ways
- compare their explanations to others’
- plan and execute a change to the program
You’ll notice that the “self-explanation” part has two columns: one titled “Explain in English what this means” and one titled “Why does it have to be here?”
When I’ve tutored for programming courses in the past, I’ve noticed that students often write unhelpful comments such as
index = index + 2; // Add 2 to index
For a fluent programmer, these are frustrating. Not only did I waste time reading something I already knew, but now I have to spend time figuring out why that line is there. My working hypothesis is that a beginning programmer needs that translation, just as a beginning student of Italian may need to translate statements into English before they can begin to interpret them. When beginners are forced to comment their code, they write what they think, which is a literal translation. I’m hoping that by making space for that translation (what Tovani calls “holding your thinking”), it will enable students to go to the next step and write something that will continue to be helpful when they stop needing every line to be written in two languages.
I’ve also asked them to document the differences between their code and the code of the person on either side of them. The example programs differ only in tiny ways — maybe a different pattern of LEDs is turned on, or there is a slight difference in the I/O bit mask. I was deliberately keeping it simple for the first run through, as we worked through the hiccups.
The next step is for students to summarize the meaning and function of the new ideas we learned (for example, the difference between a “port” and a “register”). Finally, I ask them to plan a change to the program, tell me what their plan is, and then make it happen (this order of operations is designed to prevent students from making a random change, documenting its effect, and claiming that it was their plan all along).
Since it was our first time trying this, I walked the students through it as if I was a student. I used an example program (shown in the document above) and annotated it on the board. There are a few elements I told them they were not responsible for explaining, and I skipped over them. One was the “void” keyword used as the return type of the main function (I didn’t even want to get into data types, let alone functions, or what the main function would be returning that value to).
I also didn’t explain the infinite while loop in much detail, except to confirm with them that 1 would always equal 1, so the condition would always be true (all the while loop conditions have relational operators such as 5==5, or 6>3, based on my theory that it’s easier for a beginner to see that those are true than to make sense of while(1); or something like that).
We only had time to get about half-way through this exercise in class, and so far the results are promising. I have very little to compare to, since I haven’t taught this particular course before, but I was encouraged that they were attending to detail, looking for patterns, and making inferences about the system’s requirements and possibilities. Some of the questions that came up:
- Does it matter if things are on the same line/how many tabs there are/how many lines between commands?
- Do you always need the semicolons/stars?
- Instead of using while(5==5), could I use while(x==5) and make x equal to something and change it?
- If a pin is set as analog, does that mean it needs an analog input? What if it’s set as a digital input? Does it need digital input?
I’m teaching embedded systems programming for the first time this year. The only other programming course I’ve taught used PLCs and a “ladder-logic” style language. The students and I thought it was going well until most people bombed the mid-term. I’m trying to improve on students’ ability to predict what a program will do in the hopes that it will help them make better design and debugging choices.
I start by issuing every student a PICDem 2 demo board (mostly for historical reasons; I'm evaluating other embedded systems in a project for next summer, so feel free to throw your suggestions in the comments). C is not high on my list for students who've never programmed before — especially not embedded C. But there you have it.
On the first day, I started by handing out the boards and getting everyone to push all the buttons. I also handed out a worksheet that asks students to explore the board.
The worksheet asks students to
- Make a list of everything the demo board can do (to get them thinking about the feature set)
- Find some of the important hardware on both the PCB and on the schematic (to build fluency with switching between the two)
- Keep track of questions that come up while they do that (in a style slightly reminiscent of Cornell notes or Cris Tovani's double-entry diary technique for reading comprehension. Is it fair to call this "reading" the board?)
When everyone had had a good time making shrill buzzer noises, I went around the room gathering every feature we could think of, and gathering questions too. If you want to see what my students are curious about, take a look at our mind map (names removed) and click on the “Programmable Systems” section. Highlights:
- How does the micro convert an analog voltage to 0s and 1s? Does it add in tiny increments? There’s gotta be some kind of estimating — it must round.
- What’s the other number shown on the LCD beside the voltage? It goes from 0 when the pot’s turned all the way down, to 1023 when it’s turned all the way to 5V. It’s 511 when the voltage is 2.5V. So that’s got to be 1 byte of something. [Bless the heart of the demo app designers who put the raw A/D count on the display — M.]
- How do you change the frequency of the buzzer? Is it an LC circuit?
- Is it using a timer to control the time between the buzzer voltage transitions?
- What would happen if you changed the duty cycle?
- Is there a counter? Where is it?
- So it doesn’t really know what time it is — it’s just counting oscillator transitions since it was turned on.
I love the way they’re making connections to their course work on digital systems (counters, timers, relationship between frequency and duty cycle, the significance of 1023) and AC circuits (LC oscillators). They’re asking relevant questions, making significant observations, making supported inferences, and getting excited about figuring out “what makes it do that” (which might be my “mantra” for the program). These questions will drive my approach to the next few weeks.
In a previous post, I explained the thought process behind seven of my choices of standards for evaluating thinking. They are mostly unsurprising items like clarity, precision, and logic, unpacked into student-friendly language (I hope). The remaining one is not like the others. When we are evaluating reasoning, I ask my students to find and evaluate the connections to their own experience and intuition.
I don’t do this because I want them to reject ideas that contradict their expectations. I also don’t do it because it’s a warm-and-fuzzy way of making things seem “personal” or because it’s a mnemonic that anchors things in your brain. Finally, I don’t do it (anymore) as a way to elicit and stamp out their ideas. I do it because a bunch of previously disconnected thoughts I’ve had about teaching are converging here. I’m trying to document the convergence.
The more I ask students to evaluate how their experiences and intuitions connect to new ideas, the more I learn: about my teaching, about their thinking, and about when I should ask for experience vs. when I should ask for intuition. Every day, I find a new reason why it’s important to develop the habit of asking this question, and of determining whether our ideas help us accomplish our purpose:
- because my students often can’t tell the difference between their ideas and the author’s ideas. I find this downright alarming. When asked, they will report that an author’s idea was also theirs “all along” (even if they contradicted it yesterday); or, they will report that “the author said” something that is simply not there.
- because an ounce of perplexity is worth a pound of engagement, and our own experience is a great source of perplexity (“but wait, I thought…”)
- because convincing students of some counter-intuitive idea without giving them a chance to connect to its counter-intuitiveness can steal the possibility that it will ever seem cool to them
- because “when things are acting funny, measure the amount of funny.” If you are not comparing new ideas to your intuition, nothing ever seems funny
- because failing to ask this question teaches students that what they learn in school is not connected to the rest of the world
- because as Cris Tovani writes in I Read It But I Don’t Get It, we need to ask “So what?” Making connections with the text simply to get through the assignment, without asking the “So what?” of how it moves us closer to our purpose, can damage our understanding rather than strengthen it (and might be an ingredient in the “mythologizing” that Grace writes about)
- because “when intellectual products attain classic status [and become divorced from our own ideas,] their importance is reflexively accepted, but not fully appreciated…”
- because I’m starting to think that well-reasoned misconceptions help us make progress toward changing the game from one of memorization to one that’s about “learning the genre” of a discipline. I want my students to see “technician” as something they are, not something they do… and I think that that sense of participating in an identity, not just performing its tasks, is a clue to the 50% attrition rate in my program
- because maybe initial knowledge is a barrier to learning that must be corrected or maybe alternative conceptions are hallmarks of progress but either way, students need to talk about them… and I need to know what they are
- because the other day I wished I had an engineering logbook to keep track of the results of my experiments. I haven’t wanted one of those since I left the R&D lab where I used to work. The fact that I have some results worth keeping track of makes me more certain that I’m doing the kind of inquiring (into my students’ learning) that matters.
A technician’s career depends on their troubleshooting skills, but we don’t teach that. Instead, we teach students to build and analyze circuits. We know that they will have “troubles” along the way, and we hope that they will learn to “shoot” them. Or worse — we assume that troubleshooting is a “knack” bestowed by “fortune,” and that our function is to weed out students who don’t have it.
Some students enter with those skills, some don’t, but all of them understandably interpret “building circuits” as the point. That’s what we teach, that’s what we assess, right? This causes weird tensions throughout the program. Students rarely attend to or improve troubleshooting skills deliberately. This is the story of how I’m starting to teach that this year.
Last spring, I started feeling frustrated by an underlying pattern in my classroom that I couldn't put my finger on. Eventually, I decided that entailment was part of it. My students were only sometimes clear about which was the cause and which was the effect, often begged the question, missed the meaning of stipulative definitions, and made unsupported assumptions (especially based on text, but sometimes based on the spoken word). We had no shared vocabulary to talk about what it means for a conclusion to "follow" from a set of premises. My students obviously troubleshoot in their daily lives, whether it's "where are my keys" or "why is the baby crying." When the car won't start, they don't check the tire pressure. Yet when their amp has no bias, they might check the signal source before checking the power supply (this makes no sense).
I was only occasionally successful at tapping into their everyday logic. In a program that ostensibly revolves around troubleshooting, this is a serious problem. I started discussing troubleshooting explicitly in class, modeling techniques like half-split and keeping an engineering log, and informally assessing my students on their use. It wasn’t really helping them think through the ideas — only memorize some useful techniques. I started wondering whether I should teach symbolic logic or Euclidean proofs somehow. I read about mathematical and scientific habits of mind, but there seemed to be an arbitrarily large number of possible candidates and no clear pattern language to help a newcomer decide which one to use when.
I started teaching technical reading, and boiled down the reading tactics to
- Choosing a purpose
- Finding the confusion
- Checking for mental pictures/descriptions
- Using structural clues
- Making connections to what you already know
- Asking questions/making inferences
That helped. We started to have a way to talk about the difference between what an author means and what I think. Because of that, I discovered that my students had no idea what an inference was.
I started reading everything I could about logic and critical thinking. It led me to a lot of sentimental claptrap. There's a whole unfortunate genre of books about "harnessing the power of the whole brain" and "thinking outside the box" and other contentless corporate-pep-talk cheerleading. On the rare occasions that these materials contained teaching ideas, they ignored the "critical" part of critical thinking altogether and seemed satisfied by the "creativity" of students thinking anything at all, concluding that we should "celebrate our students' (employees') ideas."
Yeah. I get that already.
One of the things I read was the website for the Foundation for Critical Thinking (FCT), and I confess that I didn’t have much hope. It looked a lot like all the others. I started reading about their taxonomy of thinking and found it simplistic. I let it sit in the back of my brain for the summer. But it kept coming back. The more I read it, the more useful it seemed. It helped me notice and connect other threads of “thinking moves” that I felt were missing in my classes:
- What premises are presupposed by a conclusion?
- What other conclusions follow from those premises?
- Are there other sets of premises from which this conclusion would also follow?
- What is the difference between a characteristic and a definition? Between a necessary and sufficient condition?
- Generalize based on specific examples
- Give a specific example based on a generalization
- Try to resolve discrepancies
- Identify the steps that lead from premise to conclusion
So I read more. Their basic model of critical thinking has 8 elements (Purpose, Questions, Information, Inferences, etc.) and 9 standards against which the elements should be assessed (Clear? Accurate? Logical? Significant? etc.). As you can see, there's a fair amount of overlap with the reading comprehension tactics. The FCT also discusses 7 intellectual traits or attitudes that they consider helpful: intellectual humility, perseverance, autonomy, empathy, integrity, etc.
Then I read an essay called Why Students and Teachers Don’t Reason Well. The authors discuss responses and perspectives of teachers who have taken FCT workshops — see the section called “The Many Ways Teachers Mis-Assess Reasoning.” It shed a lot of light on the above-mentioned sentimental claptrap.
Finally, here is their paper on faculty emphasis on critical thinking in education schools across the US. Interview responses include samples of both weak and strong characteristic profiles. I found it fascinating. Most ed school profs who were interviewed knew that they wanted to emphasize critical thinking in their classes, but couldn’t come up with a clear definition of it… very much the position I was in.
I’m a little wary of putting too much weight on this one model, but it has been very helpful in clarifying what I’m looking for in my students’ thinking (and in my own). I’m not convinced that my definition of critical thinking is exhaustive, but at least I have one now (this is better than the vague feeling of frustration and unease I had before). The expected benefit is better conversations about troubleshooting — inferences about causes, implications of various solutions, the ability to generate better-quality questions, etc. Some unexpected benefits include
What I say to students — it helps me use specific and consistent language when I write to them. I'm focusing on clarity, precision, and relevance in their questions, and clarity and logic in their inferences. Also, an agreed-upon language about high-quality thinking means that I'm training myself to stop writing "This is impressive reasoning" on their papers. Who cares that I'm impressed? Was the purpose to impress me? Or was the purpose to reason well? I'm learning to write "Using a diagram helped clarify the logic of this inference," and let them decide whether they're impressed with themselves. It's not perfect (I'm still doing a lot of the judging) but I think it's an improvement. As I mentioned, any agreed-upon taxonomy would work. I just haven't found any others that don't irritate me.
What they say to themselves — my students are already starting to expect that I will write “can you clarify what you mean by x exactly?” Language to that effect is starting to show up in the feedback they write on self-assessments.
What they say to each other — I’ve started using real-time feedback on group discussions (more soon), and realized that I’m looking for all the same things there (“When you say x, do you mean…” and “If x, does that mean y?”).
What I hear — I’m learning to hear out their current conceptions, regardless of accuracy. Giving them feedback on the quality of their thought takes my focus away from “rightness” (for now anyway). It also helps me appreciate how exquisitely logical their thinking sometimes is, complete with self-aware inferences that clearly proceed from premise to conclusion. I’m embarrassed to say that my knee-jerk desire to “protect” them from their mistakes meant that I often insisted that they mouth true statements for which they had no logical justification — in other words, I beat the logic out of them. Then I complained about it.
What they say to me — students have started asking questions in class that sound like “can you clarify what you mean by x” and “when you say x, does that mean y?” Holy smoke. From a group that’s only been in college for 10 days, I had to hear it to believe it.
The teacher’s skill sheet was a success (thanks, Dan). Today was our third day with the first-year students, and my first time explaining skills-based grading to an incoming class. Our reassessment period is Thursdays from 2:30 – 4:30, so in this morning’s shop class I dropped a skill sheet on their benches and we started using it. By the time I started explaining how I grade this afternoon, they already had a skill signed off.
I handed out their skills folders and the first two skill sheets for DC circuits. You should have seen their jaws drop when I explained that they can choose if, when, and how often they reassess. They asked great questions and gave thoughtful answers. We talked about how everyone progresses, the many ways of getting extra help, learning at your own pace, and the infinite ways of demonstrating improvement or proficiency. They wanted to know what is proof of improvement (required when applying for reassessment), and had suggestions (quiz corrections, practice problems, written explanations). They wanted to know what level 5 questions are, where to find some, and how to prevent them from getting too big. Many of them had ideas in mind already and we bounced those around to see if they meet the criteria (at least two skills, and you have to choose the problem-solving approach yourself, so it can’t be the same as something we’ve done in class).
We talked about how and why you can’t get credit for level 4 until you’ve completed level 3. I explained it in terms of employers’ expectations about basic skills. One student explained it back to me in terms of “levelling up your character” in role-playing games. We talked about feedback, from me and from themselves. I gave examples of feedback that does and does not help you improve (“I need to figure out why V and I are different” compared to “I don’t get it.”). We talked about how many points homework is worth (none). My get-to-know-you survey tells me there are a lot of soccer players in the room, so we talked about practices and push-ups. “Do you get points in the league standings for showing up to practice? What about for going to the gym?” I asked. Of course they said no. “So why do it if it’s not worth points?” They got this right away. “It helps you win the game.” “It makes you stronger.”
I enjoyed this conversation:
Student A: “So homework is just for learning.”
Me: “What are you talking about? I thought homework was for sucking up to the teacher.”
Student B: “I thought so too. That’s why I never did it.”
Student C: “I thought homework was for keeping kids in their homes at night.”
Once the questions had died down, I gave them a copy of a skill sheet that looks just like the ones I use to assess them, except that all the skills relate to my teaching. I asked them to sign and date next to any items for which they had evidence that I had done them. I did this so I could find out if they really understood how to use the thing. But it had unexpectedly positive side-effects. From a quick glance, they could tell that I was going to get a “failing” grade. It never occurred to me that they would be upset by this.
They had barely started reading when I started hearing gasps. “You’re failing!” someone called out. “Is our assessment of you going to affect your assessment of us?” someone else half-joked. “Of course I’m not passing yet,” I replied reasonably. “It’s the second day of class. There’s no possible way I could have done 60% of my job by now. That’s how it works: you start at 1, then you move up to 2.” I walked around and peeked over shoulders to make sure they got the mechanics of what to fill in where. I stopped a couple of times to talk to people who seemed to have overly generous assessments. “How have I demonstrated that?” I asked.
We reviewed it together. We got to practice technical reading in tiny, learning-outcome-sized pieces. The highly condensed text on a skill sheet changes meaning if you miss a preposition. Another unexpected side-effect: my students had noticed me doing things that I hadn’t noticed myself. They had evidence to support most of their claims, too. There were a few that I disagreed with because I had only demonstrated part of the skill, and I modelled the kind of feedback that my “teacher” could have given me to help me improve.
Overall, they seemed very concerned about my feelings about “failing”; we calculated my current topic score at 0.5/5 and filled in the bar graph on the front of the skill sheet with today’s date. I got a chance to model a growth mindset. I made sure to let them see how proud I am of having achieved a 0.5 in only two days’ work, and mentioned that this is an improvement over two days ago, when I had a zero. The usual running commentary of tongue-in-cheek jibes had a disarmingly earnest, reassuring tone. “I know that you can improve your score the next time you reassess,” one student said. Another student chimed in with “feel free to drop in to my office anytime if you want to get some feedback.”
I noticed that my students couldn’t use their textbook to help them solve problems. I didn’t know how to teach them to do that, so I set out to find out. I didn’t understand what my students didn’t know, so I asked them. They couldn’t tell me, but their answers helped me ask different questions, which led me to other resources. I reviewed books, videos, blog posts, and research websites. When I boiled down the results and applied them to my classroom, here’s what I got:
- Choose a purpose
- Find the confusion
- Check for mental pictures/descriptions
- Use structural clues
- Make connections to what you already know
- Ask questions/make inferences
Maybe you got the irony already, but it took me two months: after looking for the answers from my students, from blogs, videos, etc., the list I distilled was a summary of the very process I had used to find the list. You see, those things are what I was doing while reading, watching, listening. It turns out that I do them when I’m having a face-to-face conversation, when I’m experimenting with new equipment, when I’m inspecting a solder joint, and when I’m troubleshooting a circuit. In other words, I do the same things regardless of whether I’m reading, listening, watching, or inquiring. Can I go so far as to say that, to me, even a lecture is an inquiry activity? These aren’t techniques for reading comprehension. They’re techniques for comprehension.
While I was working away on this post, John Burk beat me to it, asking how we can teach students to learn from as many formats as possible. I’m thinking, maybe these techniques can improve our ability to see what we’re looking at, hear what we’re listening to… regardless of the medium.
That got me thinking about how I learn new things when there’s no one around to teach me. Let’s choose a suitably complex goal like, say, learning how to teach (I left a skilled trade to do this, so keep in mind that I don’t have a B.Ed. or any other formal teacher training). I listen to lectures (videos and podcasts). I read text (research papers, books, blogs). Lots of text (more blogs). I write. I practise. I experiment. Sometimes I make things up that I don’t have words for. Sometimes I learn a bunch of new words and try to apply them.
No teacher decides for me whether I should be introduced to a new idea via a screencast or an inquiry activity. And all along the way, I evaluate. Which media worked best for which goal? How do I know? What will I do next time I’m in that situation? If I’m serious about helping my students become independent learners, I eventually have to stop doing this for them.
What’s my role as a teacher in all of this? So far, I’ve come up with these:
- Helping my students use the techniques above and adapt them to their own goals, with any media they have available.
- Removing roadblocks in their use of these techniques.
- Helping them evaluate their ability to apply these techniques to different media.
- Supporting them in creating the media they need.
I’ve got a few ideas about #1, #2, and #3. But I think #4 might be the most important one.
For Using Comprehension Techniques
Jerrid Kruse contributes some great comprehension questions
The West Virginia Dept. of Ed.’s Keys to Comprehension
For Removing Roadblocks
Bret Victor’s suggestion for text that is less “information to be consumed,” more “environment to think in.”
When evaluating which medium to use for any given activity, try asking, how will this medium make it easier to use the techniques above? How will this medium make it harder?
For Why This Matters
From the ASCD’s Educational Leadership magazine, a 2011 review of research that asks, are we making our students Too Dumb for Complex Texts?
For Creating Our Own Media
Until then, I’m thinking a lot about who gets to design media, and who merely fills them with content. Stay tuned.
“I can’t believe that this [workmanship] would be acceptable in a pacemaker. I really see why electronics fail so often.” (student)
The last unit of my High Reliability Soldering course was on surface-mount components. I have tried to gradually release responsibility to the students for reading, understanding, and making decisions based on the specification document.
I introduced two especially tricky sections of the surface-mount chapter with an exercise designed to help students figure out what was confusing. Then we practiced using the de-confusification techniques shown below.
In case you can’t see the document, the main headings are
- Choose a purpose
- Find the confusion
- Check for mental pictures/descriptions
- Use structural clues
- Make connections to what you already know
- Ask questions/make inferences
This is a simplified version of what’s common to all the books about reading comprehension I looked at (see the Recent Reads page for my notes and reviews). I’ve boiled it down to techniques that I think will be most helpful for reading non-fiction about unfamiliar topics.
In class, we reviewed each technique, and came up with examples from our conversations that demonstrated them. Then I learned this incredibly useful piece of information: 9/10 people in the class did not know what an inference was.
Sometimes I’m really slow.
After a moment of feeling deflated, I realized that this is exciting. My students sometimes get angry and frustrated when I ask them to make predictions. If they don’t have a way to distinguish valid inferences from wild guesses, then predictions probably seem pretty futile. In September, we’ll spend some time learning to judge if an inference is valid. Our reading comprehension techniques are leading us directly toward evaluating the validity of arguments — something I’ve tried to figure out how to weave into my curriculum.
As preparation for the test, I asked them to apply one of these techniques to each confusing idea they had identified, or for any other idea in Chapter 7 that they weren’t sure about (see p. 2 of the handout, above). If they passed them in by the end of the day, I would write back and respond to their questions and comments. The students know I can’t always be trusted to give straight answers to questions, so I thought that the offer of direct answers would be a powerful incentive. Yet only one person turned in some “confusions.” Now, that person was the one who most needed it, but I was still a bit worried.
One way to look at it: this proves that my crazy grading system ensures that most people will not complete independent practice.
Another way: two days later, everyone in the class passed the test on Chapter 7. And not bare passes either: no marks below 70, most in the 80s and 90s. The only section that I “taught” (in the lesson on identifying confusion) was the intro. Even when students did in-class exercises reading sections 7.1.1 and 7.1.2, I never did explain what they meant. Yet somehow they got excellent grades on a test full of unfamiliar concepts. They did this by finding and interpreting information in a specification document that is written in the obscure language of process control and quality assurance. They answered questions about concepts that I had never explained (e.g. heel and toe fillets, edge overhang, component cant).
While reviewing the test, I asked students to explain the thought process behind their answers. For most questions, I got not one but several examples of how to substantiate their arguments for their answers. They were also able to point out a question that had no unambiguously correct answers from among the choices. Because they know I can be swayed by well-substantiated arguments, most of the class contributed quotations from the specification demonstrating that this question was poorly posed.
Over the next two weeks, I inspected their soldering and discussed it with them. They fluently used the ideas in the standard to describe their soldering and the faults they found. When I asked a question, they were able to find the answer in the standard. When they were unsure of something, they approached me and pointed to a sentence in the standard, not to the entire thing. But mostly, they were focused on the ideas, not the reading. For some of them, this might be the first time they’ve experienced text as a window, rather than a wall. They used the standard to support their arguments about cost-benefit ratios, manufacturing philosophies, and planned obsolescence.
They read it. They really read it.