It’s that time of year again — the incoming first-year students worked through How Your Brain Learns and Remembers. Some student comments I don’t want to lose track of:
“Of course you can grow your intelligence. How else am I not still at the mental capacity of a newborn?”
“Where do dendrites grow to? Why? How do they know where to grow to?”
“I think I can grow my intelligence with lots of practice and a calm open mind!”
“Though I feel intelligence can grow, I feel it can’t grow by much. Intelligence is your ability to learn, and you can’t just change it on the spot.”
“I think I can grow my intelligence because the older I get the smarter I get and I learn from mistakes.”
“I can definitely become more intelligent or more knowledgeable, otherwise no point in trying to learn. However, everyone is different, so each person could take more time to grow the dendrites and form the proper memories.”
“I think you can grow your intelligence through practice.”
“Do some people’s dendrites build quicker/slower and stronger/weaker than others? Do some people’s break down quicker or slower than others?”
“You should be keeping up through the week but you probably can do homework only on weekends if you really focus.”
“People as a whole are always able to learn. That’s what makes us the dominant species. It’s never too late to teach an old dog a new trick.”
Did you know robots can help us develop growth mindset? It’s true. Machine learning means that not only can robots learn, they can teach us too. To see how, check out this post on Byrdseed. I have no idea why watching videos of robots making mistakes is so funny, but my students and I were all in helpless hysterics after the first minute of this one…
After a quick discussion to refresh our memories about growth mindset and fixed mindset (which I introduced in the fall using this activity), I followed Ian’s suggestion to have the students write letters to the robot. One from each mindset. I collated them into two letters (shown below), which I will bring back to the students tomorrow. All of this feeds into a major activity about writing good quality feedback, and the regular weekly practise of students writing feedback to themselves on their quizzes.
I didn’t show the second minute of the video until after everyone had turned in their letters. But I like Ian’s suggestion of doing that later in the week and writing two new letters… where the fixed mindset has to take it all back.
Fixed mindset: "Dear robot, try not flipping pancakes. Just stop, you suck. Why don't you find a better robot to do it for you? You are getting worse. There is no chance for improvement. Give up, just reprogram yourself, you'll hurt someone. Perhaps you weren't meant to flip pancakes. Try something else. Maybe discus throwing."

Growth mindset: "Dear robot, please keep trying to flip the pancake. At least it left the pan on attempt 20. Go take a nap and try again tomorrow. Practise more. Don't feel bad, I can't flip pancakes. Keep working, and think of what can help. I see that you're trying different new techniques and that's making you get closer. Maybe try another approach. Would having another example help? Is there someone who could give you some constructive feedback? Or maybe have a way to see the pancake, like a motion capture system. That would help you keep track of the pancake as it moves through the air. Keep going, I believe in you!"
Siobhan Curious inspired me to organize my thoughts so far about meta-cognition with her post “What Do Students Need to Learn About Learning.” Anyone want to suggest alternatives, additions, or improvements?
One thing I’ve tried is to allow students to extend their due dates at will — for any reason or no reason. The only condition is that they notify me before the end of the business day *before* the due date. This removes the motivation to inflate or fabricate reasons — since they don’t need one. It also promotes time management in two ways: one, it means students have to think one day ahead about what’s due. If they start an assignment the night before it’s due and realize they can’t finish it for some reason, the extension is not available; so they get into the habit of starting things at least two days before the due date. It’s a small improvement, but I figure it’s the logical first baby step!
The other way it promotes time management is that every student's due dates end up being different, so they have to start keeping their own calendar — they can't just ask a classmate, since everyone's got custom due dates. I can nag about the usefulness of using a calendar until the cows come home, but this provides a concrete motivation to do it. This year I realized that my students, most of them of the generation that people complain is "always on their phones", don't know how to use their calendar app. I'm thinking of incorporating this next semester — especially showing them how to keep separate "school" and "personal" calendars so they can be displayed together or individually, and also why it's useful to track both the dates work is due and the blocks of time when they actually plan to work on it.
Relating Ideas To Promote Retention
My best attempt at this has been to require it on tests and assignments: “give one example of an idea we’ve learned previously that supports this one,” or “give two examples of evidence from the data sheet that support your answer.” I accept almost any answers here, unless they’re completely unrelated to the topic, and the students’ choices help me understand how they’re thinking.
Organizing Their Notes
Two things I’ve tried are handing out dividers at the beginning of the semester, one per topic… and creating activities that require students to use data from previous weeks or months. I try to start this immediately at the beginning of the semester, so they get in the habit of keeping things in their binders, instead of tossing them in the bottom of a locker or backpack. The latter seems to work better than the former… although I’d like to be more intentional about helping them “file” assignments and tests in the right section of their binders when they get passed back. This also (I hope) helps them develop methodical ways of searching through their notes for information, which I think many students are unfamiliar with because they are so used to being able to press Ctrl+F. Open-notes tests also help motivate this.
I also explicitly teach how and when to use the textbook’s table of contents vs. its index, and give assignments where they have to look up information in the text (or find a practise problem on a given topic), which is surprisingly hard for my first-year college students!
Dealing With Failure
Interestingly, I have students who have so little experience with it that they’re not skilled in dealing with it, and others who have experienced failure so consistently that they seem to have given up even trying to deal with it. It’s hard to help both groups at the same time. I’m experimenting with two main activities here: the Marshmallow Challenge and How Your Brain Learns and Remembers (based on ideas similar to Carol Dweck’s “growth mindset”).
Absolute Vs Analytical Ways of Knowing
I use the Foundation for Critical Thinking’s “Miniature Guide To Critical Thinking.” It’s short, I can afford to buy a class set, and it’s surprisingly useful. I introduce the pieces one at a time, as they become relevant. See p. 18 for the idea of “multi-system thinking”; it’s their way of pointing out that the distinction between “opinions” and “facts” doesn’t go far enough, because most substantive questions require us to go beyond right and wrong answers into making a well-reasoned judgment call about better and worse answers — which is different from an entirely subjective and personal opinion about preference. I also appreciate their idea that “critical thinking” means “using criteria”, not just “criticizing.” And when class discussions get heated or confrontational, nothing helps me keep myself and my students focused better than their “intellectual traits” (p. 16 of the little booklet, or also available online here) (my struggles, failures, and successes are somewhat documented in Evaluating Thinking).
What the Mind Does While Reading
This is one of my major obsessions. So far the most useful resources I have found are books by Cris Tovani, especially Do I Really Have to Teach Reading? and I Read It But I Don’t Get It. Tovani is a teacher educator who describes herself as having been functionally illiterate for most of her school years. Both books are full of concrete lesson ideas and handouts that can be photocopied. I created some handouts that are available for others to download based on her exercises — such as the Pencil Test and the “Think-Aloud.”
Ideas About Ideas
While attempting these things, I’ve gradually learned that many of the concepts and vocabulary items about evaluating ideas are foreign to my students. Many students don’t know words like “inference”, “definition”, “contradiction” (yes, I’m serious), or my favourite, “begging the question.” So I’ve tried to weave these into everything we do, especially by using another Tovani-inspired technique — the “Comprehension Constructor.” The blank handout is below, for anyone who’d like to borrow it or improve it.
To see some examples of the kinds of things students write when they do it, click through:
The latest post at Educating Grace is about breaking down classroom cultures of “fishing for or steering people towards the right answer, treating wrong answers as dangerous, only valuing people who give right answers.” My comment got so long that Blogger wouldn’t accept it — so I’m posting here instead.
Grace starts by posting a short video clip of a PD session with math teachers, focusing on a moment where a math teacher tries to come up with a non-standard algorithm but ends up getting the wrong answer. You should go watch it now.
I actually found it hard to watch. I felt uncomfortable with the responding teacher’s growing embarrassment, as well as with the vocal performance of her embarrassment. In the moment, I interpreted the stage-whispers she shared with a seat-mate as a way of letting the rest of the room know that she knew, of course she knew why it was wrong. Which goes back to Grace’s point — we are in a culture where it would be shameful not to know, where mistakes require some gesture of face-saving. If it was uncomfortable for me to watch, it makes me think of how squirmy it must make students…
1. I wanted to spend less time unpacking the idea when it was first mentioned, not more. Maybe because she loses face more for every minute she continues to make the mistake? But maybe also because asking someone to repeat their point is (in the generic classroom of my imagination) often a cue that the teacher wants you to say something else.
So I was imagining myself writing down her process as soon as she said it, and collecting more. Sometimes I have had success undermining the cult of correctness by putting some distance between the speaker and the strategy. After there are 5 or 6 strategies on the board, especially if they don’t all match, I can go back and ask students to think about the pros and cons of each one.
2. Another strategy that sometimes helps me is getting people to pool their answers in small groups, and report back as a group. This doesn’t solve the problem — they will still tend to correct each other, argue, and be mortified if their solution is different from their group-mates’ — but it means the loss of face happens in front of fewer people, where it might be more manageable.
Sometimes I explicitly help students practise recording all the strategies from their group and reporting all of them — I encourage them to discuss the differences without trying to convince the others. Their default strategy for listening is often “decide whether it’s right or wrong”, so just telling them to stop doing that doesn’t work as well as giving them something else to do: “try to figure out why a reasonable person might think that.”
3. Another thing I don’t do as often as I would like is asking people to record all the strategies that they think work, and then strategies that look plausible but don’t work. Recording *all* of them on the board, asking people not to say which ones are which, helps break down the assumption that all things written on boards are automatically true. This is somewhat inspired by Kelly O’Shea’s “Mistake Game“.
When we are looking through strategies that don’t work, I go back to the class with “why might a reasonable person think this.” With teachers, we might be able to deflect attention away from ourselves by asking, “This is a tempting strategy that a student could easily use. Why might a student think this? If they did, what could help them sort it out?”
A related approach is, “what question is this the right answer to?” Where one commenter on the original post found something good about the strategy’s algebra, I’m finding something good about the strategy’s heuristics. I’m not thinking about what you should actually multiply the denominator by. I’m thinking, “that would be a good strategy if we were maintaining the same speed and trying to figure out how far we got in 1.5 hours (27 miles).” In this question we’re maintaining the same distance, and asking how fast, not how far…. but it’s still an example of using the previous problem to solve a new one.
In the debrief, I found myself wanting to talk about how easy it is to answer a different question than we meant to, especially if we’re trying to do things in a new way. This must happen to students all the time — they likely have some experience with speed and distance, and some comfortable ways of thinking about them. We’re asking them to think about familiar things in unfamiliar ways, and that’s going to be disorienting. It points to the idea that we have to be careful in our assessments — just because someone gives that kind of answer, it doesn’t mean they don’t understand speed and distance. In fact, it might be an indicator of a new layer of connectedness in our thinking — similar to what Brian Frank refers to as “U-shaped development.”
The fact that the responding teacher was deliberately trying to come up with a non-standard algorithm shows intellectual courage and autonomy, traits I want to encourage in my students. What helped her develop that courage? How could we help our students develop it? I’d be curious to hear the answers from the teachers in the PD session.
I’m thinking about how to make assessments even lower stakes, especially quizzes. Currently, any quiz can be re-attempted at any point in the semester, with no penalty in marks. Students attempting a quiz for the second time must correct their original attempt and complete two practise problems in order to apply for reassessment. (FYI, mastery can also be demonstrated in any alternate format, in lieu of a quiz, but students rarely choose that option.)
The upside of requiring practise problems is eliminating the brute-force approach where students just keep randomly trying quizzes thinking they will eventually show mastery (this doesn’t work, but it wastes a lot of time). It also introduces some self-assessment into the process. We practise how to write good-quality feedback, including trying to figure out what caused them to make the mistake.
The downside is that the workload in our program is really unreasonable (dear employers of electronics technicians, if you are reading this, most hard-working beginners cannot go from zero to meeting your standards in two years. Please contact me to discuss). So, students are really upset about having to do two practise problems. I try to sell it as “customized homework” — since I no longer assign homework practise problems, they are effectively exempting themselves from any part of the “homework” in areas where they have already demonstrated proficiency. The students don’t buy it though. They put huge pressure on themselves to get things right the first time, so they won’t have to do any practise. That, of course, sours our classroom culture and makes it harder for them to think well.
I’m considering a couple of options. One is, when they write a quiz, to ask them whether they are submitting it to be evaluated or just for feedback. Again, it promotes self-assessment: am I ready? Am I confident? Is this what mastery looks and feels like?
If they’re submitting for feedback, I won’t enter it into the gradebook, and they don’t have to submit practise problems when they try it next (but if they didn’t succeed that time, it would be back to practising).
Another option is simply to chuck the practise problem requirement. I could ask for a corrected quiz and good quality diagnostic feedback (written by themselves to themselves) instead. It would be a shame, the practise really does benefit them, but I’m wondering if it’s worth it.
All suggestions welcome!
I’ve done a better job of launching our inquiry into electricity than I did last year. The key was talking about atoms (which leads to thoughts of electrons), not electricity (which leads to thoughts of how to give someone else an electric shock from an electric fence, lightning, and stories students have heard about death by electrocution).
The task was simple: “Go learn something about electrons, about atoms, and about electrical charge. For each topic, use at least one quote from the textbook, one online source, and one of your choice. Record them on our standard evidence sheets — you’ll need 9 in total. You have two hours. Go.”
I’ve used the results of that 2-hour period to generate all kinds of activities, including
- group discussions
- whiteboarding sessions
- skills for note-taking
- what to do when your evidence conflicts
- how to decide whether to accept a new idea
We practiced all the basic critical thinking skills I hope to use throughout the semester:
- asking questions about something even before you fully understand it
- identifying cause and effect
- getting used to saying “I don’t know”
- connecting in-school-knowledge to outside-school experiences
- distinguishing one’s own ideas from a teacher’s or an author’s
I’m really excited about the things the students have gotten curious about so far.
“When an electron jumps from one atom to the next, why does that cause an electric current instead of a chemical reaction?”
“When an electron becomes a free electron, where does it go? Does it always attach to another atom? Does it hang out in space? Can it just stay free forever?”
“What makes electrons negative? Could we change them to positive?”
“Are protons the same in iron as they are in oxygen? How is it possible that protons, if they are all the same, just by having more or fewer of them, make the difference between iron and oxygen?”
“If we run out of an element, say lithium, is there a way to make more?”
“Why does the light come on right away if it takes so long for electrons to move down the wire?”
“What’s happening when you turn off the lights? Where do the electrons go? Why do they stop moving?”
“What’s happening when you turn on the light? Something has to happen to push that electron. Is there a new electron in the system?”
“With protons repelling each other and being attracted to electrons, what keeps the nucleus from falling apart?”
“What happens if you somehow hold protons and electrons apart?”
“Would there be no gravity in that empty space in the atom? I like how physics are the same when comparing a tiny atom and a giant universe.”
This week, I’ve been working on Jo Boaler’s MOOC “How To Learn Math.” It’s presented via videos, forum discussions, and peer assessment; registration is still open, for those who might be interested.
They’re having some technical difficulties with the discussion forum, so I thought I would use this space to open up the questions I’m wondering about. You don’t need to be taking the course to contribute; all ideas welcome.
Student Readiness for College Math
According to Session 1, math is a major stumbling block in pursuing post-secondary education. I’m assuming the stats are American; if you have more details about the research that generated them, please let me know!
Percentage of post-secondary students who go to 2-year colleges: 50%
Percentage of 2-year college students who take at least one remedial math course: 70%
Percentage of college remedial math students who pass the course: 10%
The rest, apparently, leave college. The first question we were asked was, what might be causing this? People hazarded a wide variety of guesses. I wonder who collected these stats, and what conclusions they drew, if any?
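For a rough sense of scale, the three quoted percentages can be chained together. This is a naive back-of-envelope calculation (it assumes the stages can simply be multiplied, ignoring selection effects and whatever definitions the original researchers used):

```python
# Naive chaining of the three quoted figures -- a back-of-envelope
# sketch only, assuming the percentages can be multiplied directly.

two_year = 0.50    # share of post-secondary students at 2-year colleges
remedial = 0.70    # share of 2-year college students taking remedial math
pass_rate = 0.10   # share of remedial math students who pass

# Of ALL post-secondary students, what fraction passes (or takes and
# doesn't pass) a remedial math course at a 2-year college?
passing = two_year * remedial * pass_rate
failing = two_year * remedial * (1 - pass_rate)

print(f"pass remedial math: {passing:.1%} of all post-secondary students")
print(f"take it and don't pass: {failing:.1%}")
# prints 3.5% and 31.5% respectively
```

If the figures are even roughly right, something like a third of all post-secondary students take remedial math at a 2-year college and don't pass it — which puts some numbers behind “the rest, apparently, leave college.”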
The next topic we discussed was the unusual degree of math trauma. Boaler says this:
“When [What’s Math Got To Do With It] came out, I was [interviewed] on about 40 different radio stations across the US and BBC stations across the UK. And the presenters, almost all of them, shared with me their own stories of math trauma.”
Boaler goes on to quote Kitty Dunne, reporting on Wisconsin Radio: “Why is math such a scarring experience for so many people? … You don’t hear of… too many kids with scarring English class experience.” She also describes applications she received for a similar course she taught at Stanford, for which the 70 applicants “all wrote pretty much the same thing: that I used to be great at maths, I used to love maths, until…”.
The video describes the connection that is often assumed about math and “smartness,” as though being good at English just means you’re good at English but being good at Math means you’re “smart.” But that’s just begging the question. Where does that assumption come from? Is this connected to ideas from the Renaissance about science, intellectualism, or abstraction?
There was a brief discussion of stereotype threat: the idea that students’ performance declines when they are reminded that they belong to a group that is stereotyped as being poor at that task. For example, when demographic questions appear at the top of a standardized math test, there is a much wider gender gap in scores than when those questions aren’t asked. It can also happen just through the framing of the task. An interesting example was when two groups of white students were given a sports-related task. The group that was told it measured “natural athletic ability” performed less well than a group of white students who were not told anything about what it measured.
Boaler mentions, “researchers have found the gender and math stereotype to be established in girls as young as five years old. So they talk about the fact that young girls are put off from engaging in math before they have even had a chance to engage in maths.”
How are pre-school girls picking this stuff up? It can’t be the school system. And no, it’s not the math-hating Barbie doll (which was discontinued over 20 years ago). I’m sure there’s the odd parent out there telling their toddlers that girls can’t do math, but I doubt that those kinds of obvious bloopers can account for the ubiquity of the phenomenon. There are a lot of us actually trying to prevent these ideas from taking hold in our children (sisters/nieces/etc.) and we’re failing. What are we missing?
July 22 Update: Part of what’s interesting to me about this conversation is that all the comments I’ve heard so far have been in the third person. No one has yet identified something that they themselves did, accidentally or unknowingly, that discouraged young women from identifying with math. I’m doing some soul-searching to try to figure out my own contributions. I haven’t found them, but it seems like this is the kind of thing that we tend to assume is done by other people. Help and suggestions appreciated — especially in the first person.
Interventions That Worked
Boaler describes two interventions that had a statistically significant effect. One was in the context of a first-draft essay for which students got specific, critical feedback on how to improve. Some students also randomly received this line at the end of the feedback: “I am giving you this feedback because I believe in you.” Teachers did not know which students got the extra sentence.
The students who found the extra sentence in their feedback made more improvements and performed better in that essay. They also, check this out, “achieved significantly better a year later.” And to top it all off, “white students improved, but African-American students, they made significant improvements…” It’s not completely clear, but she seems to be suggesting that the gap narrowed between the average scores of the two groups.
The other intervention was to ask seventh grade students at the beginning of the year to write down their values, including what they mean to that student and why they’re important. A control group was asked to write about values that other people had and why they thought others might have those values.
Apparently, the students who wrote about their own values had, by the end of the year, a 40% smaller racial achievement gap than the control group.
Holy smoke. This just strikes me as implausible. A single intervention at the beginning of the year having that kind of effect months later? I’m not doubting the researchers (nor am I vouching for them; I haven’t read the studies). But assuming it’s true, what exactly is happening here?
This just in from dy/dan: Jo Boaler (Stanford prof, author of What’s Math Got to Do With It and inspiration for Dan Meyer’s “pseudocontext” series) is offering a free online course for “teachers and other helpers of math learners.” The course is called “How To Learn Math.”
“The course is a short intervention designed to change students’ relationships with math. I have taught this intervention successfully in the past (in classrooms); it caused students to re-engage successfully with math, taking a new approach to the subject and their learning. In the 2013-2014 school year the course will be offered to learners of math but in July of 2013 I will release a version of the course designed for teachers and other helpers of math learners, such as parents…” [emphasis is original]
I’ve been disheartened this year to realize how limited my toolset is for convincing students to broaden their thinking about the meaning of math. Every year, I tangle with students’ ingrained humiliation in the face of their mistakes and sense of worthlessness with respect to mathematical reasoning. I model, give carefully crafted feedback, and try to create low-stakes ways for them to practice analyzing mistakes, understanding why math in physics gives us only “evidence in support of a model” — not “the right answer”, and noticing the necessity for switching representations. This is not working nearly as well as it needs to for students to make the progress they need and that I believe they are capable of.
I hope this course will give me some new ideas to think about and try, so I’ve signed up. I’m especially interested in the ways Boaler is linking these ideas to Carol Dweck’s ideas about “mindset,” and proposing concrete ideas for helping students develop a growth mindset.
Anyone else interested?
On Exploring RC Circuits and trying to figure out why the capacitor charges faster than it discharges
Student 1: “Is the charge time always the same as the discharge time?”
Me: “According to this model, it is, if the resistance and capacitance haven’t changed.”
Student 2: “I’ve got data where the charge time was short and the discharge time was long.”
Me: “Why would a reasonable teacher say something that contradicts your data?”
Student 3, excitedly: “What circuit was it? Was there anything else in the circuit?”
Student 1: “I can’t remember what it was called — it had a resistor, a capacitor, and a diode.”
Student 2: “That’s it then! The diode — it’s changing its resistance!”
Student 1: “Yes — it goes from acting like a short to acting like an open. Thanks for bringing that up [Classmate’s Name] — I just answered a HUGE question from that lab!”
Student services counsellor who sat in for a day
“You’re challenging my whole idea about science.”
While exploring why capacitors act like more and more resistance as they charge
“Maybe the negative side of the cap is filling up with electrons, which means less capacitance. According to the ‘tau model’, charge time = 5 * R * C. So if the charge time never changes, and the capacitance is going down, then the resistance must be going up.”
[I’m excited about this because, although it shows a misunderstanding of the definition of capacitance, the student is tying together a lot of new ideas. They are also using proportional reasoning and making sense of the story behind a formula. I need a better way to help students feel proud of things like this…]
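The “tau model” the student is reasoning from can be sketched numerically. A charging capacitor approaches the source voltage as V(t) = Vs·(1 − e^(−t/RC)), and the rule of thumb “charge time = 5·R·C” comes from the fact that after five time constants the capacitor is within about 1% of its final voltage. A minimal sketch (the component values here are arbitrary examples, not from the student’s circuit):

```python
import math

# Charging of an RC circuit toward a source voltage Vs:
#   V(t) = Vs * (1 - exp(-t / (R * C)))
# The rule of thumb "charge time = 5 * R * C" reflects that after
# five time constants the capacitor is within ~1% of Vs.

R = 1_000      # ohms (arbitrary example value)
C = 100e-6     # farads (arbitrary example value)
Vs = 5.0       # source voltage, volts

tau = R * C    # one time constant, in seconds

for n in range(1, 6):
    v = Vs * (1 - math.exp(-n))  # voltage after n time constants
    print(f"after {n} tau ({n * tau:.2f} s): {v / Vs:.1%} of Vs")
# After 5*tau the capacitor sits at ~99.3% of Vs -- "fully charged"
# for practical purposes, which is where the 5*R*C rule comes from.
```

This also shows why the student’s proportional reasoning is sound even though the definition of capacitance is off: if the charge time 5·R·C stays fixed while C goes down, R must indeed go up.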
Student critique of a Wikipedia page
“There’s some great begging the question, right there!”
Student analyzing the mistake in their thinking about a resistor-diode circuit
“I didn’t think of current not flowing at all during the negative alternation of the source. This would mean that the direction of current through the resistor does not technically change. I thought that if current was flowing through the resistor, it would change direction even if there is a very small amount of current flowing. I did do a good job about thinking of the electrons already in the wires.”
One student’s feedback on another student’s paper
“I understand fully what you are trying to explain!”
On figuring out why a diode works
“If you make the connection to a wire, it’s like how copper atoms…”
“If it wasn’t doped, wouldn’t current flow in both directions?”
Students discussing a shake-to-charge flashlight they are designing
“In our rechargeable flashlight, if you put the switch in parallel with the diode, when it’s closed it will just short it out…”
Student who gave a recruiting presentation at a high school
“The day was a great step up for me that I never ever thought possible. To be able to go back to the high school where I am pretty sure most had given up hope on me and see and hear them tell me how proud they are of me for where I am today is a feeling I will never forget.”
As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year. The one that presses on me the most is “readiness.” In other words,
- What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
- How do I make sure they get the greatest opportunity to learn those things?
- How do I make sure no one graduates without those things? And most frustratingly,
- How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?
Some people might disagree that some of these points are worth worrying about. If you don’t teach in a trade school, these questions may be irrelevant or downright harmful. K-12 education should not be a trade school. Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be). However, I DO teach in a trade school, so these are the questions that matter to me.
Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it. It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.
In that light, my questions about job-readiness have reached the point of obsession. To be a technician is to inquire: to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults. But teaching my students to inquire means they must actually inquire, and I can’t force that to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.). At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs. I haven’t found the balance yet.
I’ll probably write more about this as I try to figure it out. In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing. Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.” In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question. In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.
In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms. During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement. There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their level of readiness in the telecom field might be much higher than if they worked on motor controls. Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics). There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments). But overall, it’s a really useful experience.
Still, I continue to wonder about the accuracy of my judgement. I also wonder about how to open this conversation with my colleagues. It seems like something it would be useful to work on together. Or would it? The comments on Wiggins’ post are almost as interesting as the post itself.
It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.
The question isn’t “why aren’t students prepared for college?” It’s also not “are college profs’ expectations reasonable?” It’s “why are we so mistaken about what college instructors expect?”
My students often miss this kind of subtle distinction too. It seems that our students aren’t the only ones who have difficulty with close reading (especially when stressed and overwhelmed).
Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations. Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.
A similar problem is reported about college instructors. The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.
Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety. My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments. Most of them literally can’t do it at first. It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection. Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”? If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure. As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”
One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have. For those of us who don’t, let’s brainstorm about how we can get it, or what else might help. Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.
I’m interested in commenter I Hodge’s point about the survey questions. The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension. In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared. This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college. I wonder why they posed those two questions so differently?
Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students. Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or on having gotten a particular score in certain high school courses. Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards or standardized tests or other systemic problems, I don’t see how colleges can fix that, other than by administering an entrance test. Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one. Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system? I really don’t know the answer to this.
Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues. I know I’d be all for it. There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college. Are we ready for this?