A local media outlet recently wrote
“Why the constant, often blatant lying? For one thing, it functioned as a means of fully dominating subordinates, who would have to cast aside all their integrity to repeat outrageous falsehoods and would then be bound to the leader by shame and complicity. “The great analysts of truth and language in politics” — writes McGill University political philosophy professor Jacob T. Levy — including “George Orwell, Hannah Arendt, Vaclav Havel — can help us recognize this kind of lie for what it is…. Saying something obviously untrue, and making your subordinates repeat it with a straight face in their own voice, is a particularly startling display of power over them. It’s something that was endemic to totalitarianism.”
How often does this happen in our classrooms? How often do we require students to memorize and repeat things they actually think are nonsense?
- “Heavy things fall at the same speed as light things.” (Sure, whatever.)
- “An object in motion will stay in motion forever unless something stops it.” (That’s ridiculous. Everyone knows that everything stops eventually. Even planets’ orbits degrade.)
- “When you burn propane, water comes out.” (Pul-lease.)
- The answer to “in January of the year 2000, I was one more than eleven times as old as my son William, while in January of 2009, I was seven more than three times as old as him” is somehow not “why do you not know the age of your own kid?”
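For what it’s worth, the age puzzle does have a consistent answer. A quick brute-force check (mine, not part of the original problem):

```python
# Brute-force check of the age puzzle:
#   January 2000: parent = 11 * son + 1
#   January 2009: parent + 9 = 3 * (son + 9) + 7
def solve_age_puzzle():
    for son in range(1, 30):
        parent = 11 * son + 1
        if parent + 9 == 3 * (son + 9) + 7:
            return son, parent

print(solve_age_puzzle())  # (3, 34): William was 3 and the parent 34 in January 2000
```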
Real conversation I had with a class a few years ago:
Me: what do you think so far about how weight affects the speed that things fall?
Students (intoning): “Everything falls at the same speed.”
Me: So, do you think that’s weird?
Me: But, this book… I can feel the heaviness in my hand. And this pencil, I can barely feel it at all. It feels like the book is pulling harder downward on my hand than the pencil is. Why wouldn’t that affect the speed of the fall?
Student: “It’s not actually pulling harder. It just feels that way, but that’s weight, not mass.”
Me: (weeps quietly)
Please don’t lecture me about the physics. I’m aware. Please also don’t lecture me about the terrible fake-Socratic-teaching I’m doing in that example dialogue. I’m aware of that too. I’m just saying that students often perceive these to contradict their lived experience, and research shows that outside of classrooms, even those who said the right things on the test usually go right back to thinking what they thought before.
And no, I’m not comparing the role of teachers to the role of Presidents or Prime Ministers. I do realize they’re different.
Should I Conclude Any of These Things?
- Students’ ability to fail to retain or synthesize things that don’t make sense to them is actually a healthful and critically needed form of resistance.
- When teachers complain about students “just memorizing what they need for the test and forgetting it after, without trying to really digest the material,” what we are complaining about is their fascism-prevention mechanism.
- Teachers have the opportunity to be the “warm up,” the “opening act” — the small-scale practice ground where young minds practice repeating things they don’t believe, thinking they can safely forget them later.
- Teachers have the opportunity to be the “inoculation” — the small-scale practice ground where young minds can practice “honoring their dissatisfaction” in a way that, if they get confident with it, might have a chance at saving their integrity, their souls, and their democracy.
Applying this train of thought to the conventional ways of doing corporate diversity training is left as an exercise for the reader.
Creating a classroom culture of inquiry is getting better and better every September in most ways. It’s especially working well to reassure the students with little previous physics experience, to excite the students with previous unpleasant experiences with physics, to challenge the students who found previous physics classes boring or stifling, and to empower students who’ve been marginalized by schooling in general. But one thing I’m still struggling with is responding well to the students who have been taught to uncritically regurgitate correct answers — and who’ve embraced it.
How do I get curious about their ideas? My conflict mediation coach suggests finding out what need that meets, what they got from that experience that they’re not getting elsewhere. I confess that I’m afraid to find out. I’m also afraid of the effect they have on the other students. Their dismissive insistence that other people’s theories are “wrong” can quickly undo weeks of carefully cultivating a spirit of exploring and evaluating the evidence ourselves; their pat answers to other people’s questions make it seem like it’s stupid to be curious at all.
I have a bunch of options here… one is an activity called “Thinking Like a Technician” where I introduce the idea that “believing” is different from provisionally accepting the theory best supported by the current evidence. I show the Wikipedia page for atomic theory to draw out the idea that there are many models of the atom, that all of them are a little bit wrong, and that our job is to choose which one we need for which situations, rather than to figure out which one is right. That seems to help a bit, and give us some points of reference to refer back to.
I show a video with Malcolm Longair and Michio Kaku explaining that atoms are made of mostly nothingness. But I think it makes it worse. The students who are excited get more excited; the ones who feel like I’m threatening the authority of the high school physics teachers they idolize get even angrier. For the rest of the class, it’s wonderful — but for this subset, it’s uncomfortably close to Elicit-Confront-Resolve. They experience it as a form of “expose-and-shame“, and unsurprisingly retaliate. If they can’t find some idea of mine to expose and shame, they’ll turn on the other students.
Something I’m trying to improve: How do I help students re-evaluate things that seem solid? It’s not just that they respond with defensiveness; they also tend to see the whole exercise of inquiry (or, as some people call it, “science”) as a waste of time. What could make it worth re-examining the evidence when you’re that sure?
Here are some conversations that come up every year.
1. Zero Current
Student: “I tried to measure current, but I couldn’t get a reading.”
Me: “So the display was blank?”
Student: “No, it just didn’t show anything.”
(Note: Display showed 0.00)
2. Zero Resistance
Student: “We can’t solve this problem, because an insulator has no resistance.”
Me: “So it has zero ohms?”
Student: “No, it’s too high to measure.”
3. Zero Resistance, In a Different Way
Student: “In this circuit, X = 10, but we write R = 0 because the real ohms are unknown.”
(Note: The real ohms are not unknown. The students made capacitors out of household materials last week, so they have previously explored that the plates have approximately zero resistance and the dielectric is considered open.)
4. Zero Resistance Yet Another Way
Student: “I wrote zero ohms in my table for the resistance of the battery since there’s no way to measure it.”
What I Wonder
- Are students thinking about zero as an indicator that means “error” or “you’re using the measuring tool wrong”? A bathroom scale might show zero if you weren’t standing on it. A gas gauge shows zero when the car isn’t running.
- When students say “it has none,” as in example 2, what is it that there is none of? They might mean “it has no known value,” which might be true, as opposed to “it has no resistance.”
- Is this related to a need for more concreteness? For example, would it help if we looked up the actual resistance of common types of insulation, or measured it with a megger? That way we’d have a number to refer to.
- #3 really stumps me. Is this a way of using “unknown” because they’re thinking of the dielectric as an insulator that is considered “open”, so that #3 is just a special case of #2? Or is it unknown because the plates are considered to have 0 resistance and the dielectric is considered open, so we “don’t know” the resistance because it’s both at the same time? The student who said it finds it especially hard to express his reasoning, and so couldn’t elaborate when I tried to find out where he was coming from.
- Why does this come up so often for resistance, and sometimes for current, but I can’t think of a single example for voltage? I suspect it’s because resistance and current feel concrete, like real phenomena that students can visualize, so they’re more able to experiment with their meaning. I think they’re avoiding voltage altogether: it’s about energy, which is weird in the first place; it’s a difference of energies, which makes it even less concrete, because it’s not really the amount of anything, just the difference between two amounts; and on top of that, we never get to find out what the actual energies are, only the difference between them. All of which makes voltage even more abstract and hard to think about.
- Since this comes up over and over about measurement, is it related to seeing the meter as an opaque, incomprehensible device that might just lie to you sometimes? If so, this might be a kind of intellectual humility, acknowledging that they don’t fully understand how the meter works. That’s still frustrating to me though, because we spend time at the beginning of the year exploring how the meter works — so they actually do have the information to explain what inside the meter could show a 0A reading. Maybe those initial explanations about meters aren’t concrete enough — perhaps we should build one. Sometimes students assume explanations are metaphors when actually they’re literal causes.
- Is it related to treating automated devices in general as “too complicated for normal people to understand”? If that’s what I’m reading into the situation, it would explain why I have weirdly disproportionate irritation and frustration: I’m angry about this as a social phenomenon of elitism and disempowerment, and I assess the success of my teaching partly on the degree to which I succeed in subverting it, both of which are obviously not my students’ fault.
One possibility is that they’re actually proposing an idea similar to the database meaning of “null” — something like unknown, or undefined, or “we haven’t checked yet.”
I keep suspecting that this is about a need for more symbols. Do we need a symbol for “we don’t know”? It should definitely not be phi, and not the null symbol — it needs to look really different from zero. Question mark maybe?
If students are not used to school-world tasks where the best answer is “that’s not known yet” or “that’s not measurable with our equipment”, they may be in the habit of filling in the blank. If that’s the case, having a place-holder symbol might help.
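One concrete version of that placeholder, borrowing the database idea of null from above: a lab-notebook table where “not measured yet” is stored as Python’s None, so it can never be mistaken for a zero reading. (The component names and values here are made up.)

```python
# Hypothetical lab-notebook table, in ohms. None means "not measured yet";
# 0.0 means "the meter read zero", which is a different claim entirely.
readings = {"R1": 470.0, "R2": None, "R3": 0.0}

for name, ohms in readings.items():
    if ohms is None:
        print(f"{name}: not measured yet")
    elif ohms == 0.0:
        print(f"{name}: below the meter's resolution")
    else:
        print(f"{name}: {ohms} ohms")
```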
This year, I’ve really started emphasizing the idea that zero, in a measurement, really means “too low to measure”. I’ve also experimented with guiding them to decipher the precision of their meters by asking them to record “0.00 mA” as “< 5uA”, or whatever is appropriate for their particular meter. It helps them extend their conceptual fluency with rounding (since I am basically asking them to “unround”); it helps us talk about resolution, and it can help in our conversation about accuracy and error bars. Similarly, “open” really means “resistance is too high to measure” (or relatedly, too high to matter) — so we find out what their particular meter can measure and record it as “>X MOhms”.
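That “unrounding” habit can be written down as a rule (this is my sketch, and the resolution figure is hypothetical; a real meter’s manual would supply it):

```python
def record_reading(display_value, resolution):
    """Translate a meter display into the measurement claim it supports.

    Both arguments are in the same units (e.g. mA). A display of zero
    doesn't mean "no current"; it means "less than the meter can resolve."
    """
    if display_value == 0:
        return f"< {resolution}"
    return f"{display_value} ± {resolution / 2}"

print(record_reading(0, 0.01))     # "< 0.01"  (i.e. below resolution)
print(record_reading(2.37, 0.01))  # "2.37 ± 0.005"
```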
The downfall is that they start to want to use those numbers for something. They have many ways of thinking about the inequality signs, and one of them is to simply make up a number that corresponds to their idea of “significantly bigger.” For example, when solving a problem, if they’re curious about whether electrons are actually flowing through air, they may use Ohm’s law and plug in 2.5 MOhms for the resistance of air. At first I rolled with it, because it was part of a relevant, significant, and causal line of thinking. The trouble was that I then didn’t know how to respond when they started assuming that 2.5 MOhms was the actual resistance of air (any amount of air, incidentally…), and my suggestion that air might also be 2.0001 MOhms was met with resistance. (Sorry, couldn’t resist.) (Ok, I’ll stop…)
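One way I can imagine redirecting that urge is to show that the inequality itself is usable: a lower bound on resistance gives an upper bound on current, no made-up numbers required. (The voltage here is hypothetical.)

```python
V = 12.0        # supply voltage, volts (hypothetical)
R_min = 2.0e6   # ">2 MOhms" is a minimum resistance, not a value

I_max = V / R_min  # so Ohm's law yields a maximum current, not a value
print(f"I < {I_max * 1e6:.0f} uA")  # I < 6 uA
```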
I’m afraid that this is making it hard for them to troubleshoot. Zero current, in particular, is an extremely informative number — it means the circuit is open somewhere. That piece of information can solve your problem, if you trust that your meter is telling you a true and useful thing. But if you throw away that piece of information as nonsense, it both reduces your confidence in your measurements, and prevents you from solving the problem.
Some Responses I Have Used
“Yes, your meter is showing 0.00 because there is 0.00 A of current flowing through it.”
“Don’t discriminate against zero — it isn’t nothing, it’s something important. You’ll hurt its feelings!”
Not helpful, I admit! If inquiry-based learning means that “students inquire into the discipline while I inquire into their thinking”*, neither of those is happening here.
Some Ideas For Next Year
- Everyone takes apart their meter and measures the current, voltage, and resistance of things like the current-sense resistor, the fuse, the leads…
- Insist on more consistent use of “less than 5 uA” or “greater than 2MOhms” so that we can practise reasoning with inequalities
- “Is it possible that there is actually 0 current flowing? Why or why not?”
- Other ideas?
*I stole this definition of inquiry-based learning from Brian Frank, on a blog post that I have never found again… point me to the link, someone!
This week, I’ve been working on Jo Boaler’s MOOC “How To Learn Math.” It’s presented via videos, forum discussions, and peer assessment; registration is still open, for those who might be interested.
They’re having some technical difficulties with the discussion forum, so I thought I would use this space to open up the questions I’m wondering about. You don’t need to be taking the course to contribute; all ideas welcome.
Student Readiness for College Math
According to Session 1, math is a major stumbling block in pursuing post-secondary education. I’m assuming the stats are American; if you have more details about the research that generated them, please let me know!
Percentage of post-secondary students who go to 2-year colleges: 50%
Percentage of 2-year college students who take at least one remedial math course: 70%
Percentage of college remedial math students who pass the course: 10%
The rest, apparently, leave college. The first question we were asked was, what might be causing this? People hazarded a wide variety of guesses. I wonder who collected these stats, and what conclusions they drew, if any?
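Taken at face value and compounded (my arithmetic, and the study may define its populations differently), those three figures imply that nearly a third of all post-secondary students are 2-year college students who take remedial math and don’t pass it:

```python
two_year = 0.50     # fraction of post-secondary students at 2-year colleges
remedial = 0.70     # fraction of those who take at least one remedial math course
no_pass = 1 - 0.10  # fraction of remedial math students who don't pass

stuck = two_year * remedial * no_pass
print(f"{stuck:.1%}")  # 31.5%
```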
The next topic we discussed was the unusual degree of math trauma. Boaler says this:
“When [What’s Math Got To Do With It] came out, I was [interviewed] on about 40 different radio stations across the US and BBC stations across the UK. And the presenters, almost all of them, shared with me their own stories of math trauma.”
Boaler goes on to quote Kitty Dunne, reporting on Wisconsin Radio: “Why is math such a scarring experience for so many people? … You don’t hear of… too many kids with scarring English class experience.” She also describes applications she received for a similar course she taught at Stanford, for which the 70 applicants “all wrote pretty much the same thing: that I used to be great at maths, I used to love maths, until…”
The video describes the connection that is often assumed between math and “smartness,” as though being good at English just means you’re good at English, but being good at math means you’re “smart.” But that just raises the question: where does that assumption come from? Is this connected to ideas from the Renaissance about science, intellectualism, or abstraction?
There was a brief discussion of stereotype threat: the idea that students’ performance declines when they are reminded that they belong to a group that is stereotyped as being poor at that task. For example, when demographic questions appear at the top of a standardized math test, there is a much wider gender gap in scores than when those questions aren’t asked. It can also happen just through the framing of the task. An interesting example: two groups of white students were given a sports-related task, and the group that was told it measured “natural athletic ability” performed less well than the group that was not told anything about what it measured.
Boaler mentions, “researchers have found the gender and math stereotype to be established in girls as young as five years old. So they talk about the fact that young girls are put off from engaging in math before they have even had a chance to engage in maths.”
How are pre-school girls picking this stuff up? It can’t be the school system. And no, it’s not the math-hating Barbie doll (which was discontinued over 20 years ago). I’m sure there’s the odd parent out there telling their toddlers that girls can’t do math, but I doubt that those kinds of obvious bloopers can account for the ubiquity of the phenomenon. There are a lot of us actually trying to prevent these ideas from taking hold in our children (sisters/nieces/etc.) and we’re failing. What are we missing?
July 22 Update: Part of what’s interesting to me about this conversation is that all the comments I’ve heard so far have been in the third person. No one has yet identified something that they themselves did, accidentally or unknowingly, that discouraged young women from identifying with math. I’m doing some soul-searching to try to figure out my own contributions. I haven’t found them, but it seems like this is the kind of thing that we tend to assume is done by other people. Help and suggestions appreciated — especially in the first person.
Interventions That Worked
Boaler describes two interventions that had a statistically significant effect. One was in the context of a first-draft essay for which students got specific, critical feedback on how to improve. Some students also randomly received this line at the end of the feedback: “I am giving you this feedback because I believe in you.” Teachers did not know which students got the extra sentence.
The students who found the extra sentence in their feedback made more improvements and performed better in that essay. They also, check this out, “achieved significantly better a year later.” And to top it all off, “white students improved, but African-American students, they made significant improvements…” It’s not completely clear, but she seems to be suggesting that the gap narrowed between the average scores of the two groups.
The other intervention was to ask seventh grade students at the beginning of the year to write down their values, including what they mean to that student and why they’re important. A control group was asked to write about values that other people had and why they thought others might have those values.
Apparently, the students who wrote about their own values had, by the end of the year, a 40% smaller racial achievement gap than the control group.
Holy smoke. This just strikes me as implausible. A single intervention at the beginning of the year having that kind of effect months later? I’m not doubting the researchers (nor am I vouching for them; I haven’t read the studies). But assuming it’s true, what exactly is happening here?
This just in from dy/dan: Jo Boaler (Stanford prof, author of What’s Math Got to Do With It and inspiration for Dan Meyer’s “pseudocontext” series) is offering a free online course for “teachers and other helpers of math learners.” The course is called “How To Learn Math.”
“The course is a short intervention designed to change students’ relationships with math. I have taught this intervention successfully in the past (in classrooms); it caused students to re-engage successfully with math, taking a new approach to the subject and their learning. In the 2013-2014 school year the course will be offered to learners of math but in July of 2013 I will release a version of the course designed for teachers and other helpers of math learners, such as parents…” [emphasis in original]
I’ve been disheartened this year to realize how limited my toolset is for convincing students to broaden their thinking about the meaning of math. Every year, I tangle with students’ ingrained humiliation in the face of their mistakes and sense of worthlessness with respect to mathematical reasoning. I model, give carefully crafted feedback, and try to create low-stakes ways for them to practice analyzing mistakes, understanding why math in physics gives us only “evidence in support of a model” — not “the right answer”, and noticing the necessity for switching representations. This is not working nearly as well as it needs to for students to make the progress they need and that I believe they are capable of.
I hope this course will give me some new ideas to think about and try, so I’ve signed up. I’m especially interested in the ways Boaler is linking these ideas to Carol Dweck’s ideas about “mindset,” and proposing concrete ideas for helping students develop a growth mindset.
Anyone else interested?
What scientifically-honest questions can I ask my students to tangle with, based on their current ideas and expectations?
This question is at the heart of a lot of my classroom’s success and also anxiety. When I ask good questions, students are more likely to evaluate evidence thoroughly, seek contradictions, resolve those contradictions, hold each other and themselves accountable to what we know so far, and generate significant new questions for our next round of research.
A poorly-chosen question reveals itself when students don’t have enough information or skill to make sense of the information they find, or can’t think of ways to find information at all, or don’t care about the answer, or can’t see how it’s related to their goal of becoming an electronics tech.
I’m intrigued by what’s going on in this video, a clip of a TED talk featuring Bobby McFerrin (of “Don’t Worry Be Happy” fame, but also a brilliant performer of many genres). For maximum benefit, try singing along.
The question McFerrin asks himself seems to be, “what musically honest question can I ask this audience?”
The question he poses to the audience is, “What’s the next note?”
This question works because he was able to
- anticipate the ideas participants are likely to have about the topic (the pentatonic scale is surprisingly cross-cultural)
- anticipate which ideas are difficult to learn, and which ones are not (he avoids certain scale degrees and uses a tune that’s going to be structurally familiar to an American audience)
- choose a question that’s simple enough for people to make sense of using the tools they already have
- make the task interesting (and the big picture “audible”) by doing the more complicated work himself.
I’m getting much better at anticipating common initial ideas and eliciting my students’ thinking. I’m still not great at choosing the question, or choosing the right moments to suggest the question.
I’m not sure what I make of this. In the video, the participants are not exactly learning something new. They are realizing something they didn’t realize that they already knew. This doesn’t give me much insight into tackling the topics that are difficult to learn. But I keep thinking about it.
As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year. The one that presses on me the most is “readiness.” In other words,
- What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
- How do I make sure they get the greatest opportunity to learn those things?
- How do I make sure no one graduates without those things? And most frustratingly,
- How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?
Some people might disagree that some of these points are worth worrying about. If you don’t teach in a trade school, these questions may be irrelevant or downright harmful. K-12 education should not be a trade school. Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be). However, I DO teach in a trade school, so these are the questions that matter to me.
Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it. It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.
In that light, my questions about job-readiness have reached the point of obsession. To be a technician is to inquire: to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults. However, teaching my students to inquire means they must inquire. I can’t force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.). At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs. I haven’t found the balance yet.
I’ll probably write more about this as I try to figure it out. In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing. Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.” In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question. In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.
In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms. During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement. There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their level of readiness in the telecom field might be much higher than if they worked on motor controls. Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics). There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments). But overall, it’s a really useful experience.
Still, I continue to wonder about the accuracy of my judgement. I also wonder about how to open this conversation with my colleagues. It seems like something it would be useful to work on together. Or would it? The comments on Wiggins’ post are almost as interesting as the post itself.
It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.
The question isn’t, “why aren’t students prepared for college.” It’s also not “are college profs’ expectations reasonable.” It’s “why are we so mistaken about what college instructors expect?”
My students, too, often miss this kind of subtle distinction. It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).
Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations. Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.
A similar problem is reported about college instructors. The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.
Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety. My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments. Most of them literally can’t do it at first. It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection. Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”? If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure. As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”
One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have. For those of us who don’t, let’s brainstorm about how we can get it, or what else might help. Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.
I’m interested in commenter I Hodge’s point about the survey questions. The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension. In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared. This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college. I wonder why they posed those two questions so differently?
Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students. Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses. Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards or standardized tests or other systemic problems, I don’t see how colleges can fix, other than by administering an entrance test. Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one. Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system? I really don’t know the answer to this.
Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues. I know I’d be all for it. There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college. Are we ready for this?
Some interesting comments on my recent post about causal thinking have got my wheels turning. They put me in mind of the conversation at Overthinking My Teaching about whether “repeated addition” is the best way to approach teaching exponents. In that post, Christopher Danielson points out the helpfulness of shifting from “Why is Approach X wrong” or even “Which approach is correct” toward “What is gained and lost when using Approach X?”
In that light, I’m thinking back on my post and the comments. For example:
I talk about the difference between “who/what you are” (the definition of you) and “what caused you” (a meeting of sperm and egg). In the systems of belief that my students tend to have, people are not thought to “just happen” or “cause themselves.” It can help open the conversation. However, even when I do this, they are surprisingly unlikely to transfer that concept to atomic particles.
“Purpose is a REAL facet in all of nature because everything has a natural function e.g., the role of mitochondria in eukaryotic cells is ATP production, or that the nature of negatively charged electrons is to attract and repel + and – charged particles respectively, etc.”
But I think it’s the same mistake to presume that they really *mean* that the electron has desires and wants, which is a slippery slope to thinking they *can’t* access or feel the need to explore the deeper causal relationships.
I’m noticing that there are ideas I expect students to extend from humans to particles (forces can act on us), and ideas I expect them to find not-extensible (desire). These examples are the easy ones; “purpose” is harder to place clearly in one category or the other, and “cause” probably belongs in both categories but means something different in each. I need to think more clearly about which ones are which and why, and how to help students develop their own skills for distinguishing.
I’m trying to stop assuming that when students talk about electrons’ “desires,” they are referring to a deeper story; I also need to avoid assuming that they are not, or that they don’t want to explore one, or aren’t drawn to.
I’m on a personal “fast” of discussing electrons’ purposes and desires, at least while I’m in earshot of my students. It’s hard to break those habits, exactly because they are so helpful. However, it has the useful result that all the ideas about purpose and desires that are getting thrown around in class come from the students. The students seem more willing to question them than when the ideas come from me. Unfortunately they are having a really hard time understanding each other’s metaphors (even though the metaphors are not particularly far-fetched, by my reckoning), and I’m having a really hard time facilitating the conversation to help them see each other’s point of view. But that still seems better than before, when the metaphors were not getting questioned at all, and maybe not even noticed as metaphors.
Michael Pershan kicked my butt recently with a post about why teachers tend to plateau in skill after their third year, connecting it to Cal Newport’s ideas such as “hard practice” (and, I would argue, “deep work”).
Michael distinguishes between practice and hard practice, and wonders whether blogging belongs on his priority list:
“Hard practice makes you better quickly. Practice lets you, essentially, plateau. …Put it like this: do you feel like you’re a 1st year teacher when you blog? Does your brain hurt? Do you feel as if you’re lost, unsure how to proceed, confused? If not, you’re not engaged in hard practice.”
Ooof. On one hand, it made me face my desire to avoid hard practice; I feel like I’ve spent the last 8 months trying to decrease how much I feel like that. I’ve tried to create classroom procedures that are more reusable and systematic, especially for labs, whiteboarding sessions, class discussions, and model presentations.
It’s a good idea to periodically take a hard look at that avoidance, and decide whether I’m happy with where I stand. In this case, I am. I don’t think the goal is to “feel like a first year teacher” 100% of the time; it’s not sustainable and not generative. But it reminds me that I want to know which activities make me feel like that, and consciously choose some to seek out.
Michael makes this promise to himself:
It’s time to redouble my efforts. I’m half way through my third year, and this would be a great time for me to ease into a comfortable routine of expanding my repertoire without improving my skills.
I’m going to commit to finding things that are intellectually taxing that are central to my teaching.
It made me think about what my promises are to myself.
Be a Beginner
Do something every summer that I don’t know anything about and document the process. Pay special attention to how I treat others when I am insecure, what I say to myself about my skills and abilities, and what exactly I do to fight back against the fixed-mindset that threatens to overwhelm me. Use this to develop some insight into what exactly I am asking from my students, and to expand the techniques I can share with them for dealing with it.
Last summer I floored my downstairs. The summer before that I learned to swim — you know, with an actual recognizable stroke. In both cases, I am proud of what I accomplished. In the process, I was amazed to notice how much concentration it took not to be a jerk to myself and others.
Learn More About Causal Thinking
I find myself being really sad about the ways my students think about causality. On one hand, I think my recent dissections of the topic are a prime example of “misconceptions listening” — looking for the deficit. I’m pretty sure my students have knowledge and intuition about cause that I can’t see, because I’m so focused on noticing what’s going wrong. In other words, my way of noticing students’ misconceptions is itself a misconception. I’d rather be listening to their ideas fully, doing a better job of figuring out what’s generative in their thinking.
What to do about this? If I believe that my students need to engage with their misconceptions and work through them, then that’s probably what I need too. There’s no point in my students squashing their misconceptions in favour of “right answers”; similarly, there’s no point in me squashing my sadness and replacing it with some half-hearted “correct pedagogy.”
Maybe I’m supposed to be whole-heartedly happy to “meet my students where they are,” but if I said I was, I’d be lying. (That phrase has been used so often to dismiss my anger at the educational malpractice my students have endured that I can’t even hear it without bristling). I need to midwife myself through this narrow way of thinking by engaging with it. Like my students, I expect to hold myself accountable to my observations, to good-quality reasoning, to the ontology of learning and thinking, and to whatever data and peer feedback I can get my hands on.
My students’ struggle with causality is the puzzle from which my desire for explanation emerged; it is the source of the perplexity that makes me unwilling to give up. I hope that pursuing it honestly will help me think better about what it’s like when I ask my students to do the same.
Interact with New Teachers
Talking with beginning teachers is better than almost anything else I’ve tried for forcing me to get honest about what I think and what I do. There’s a new teacher in our program, and talking things through with him has been a big help in crystallizing my thoughts (mutually useful, I think). I will continue doing this and documenting it. I also put on a seminar on peer assessment for first-year teachers last summer; it was one of the more challenging lesson plans I’ve ever written. If I have another chance to do this, I will.
Work for Systemic Change
I’m not interested in strictly personal solutions to systemic problems. I won’t have fun, or meet my potential as a teacher, if I limit myself to improving me. I want to help my institution and my community improve, and that means creating conditions and communities that foster change in collective ways. For two years, I tried to do a bit of this via my campus PD committee; for various reasons, that avenue turned out not to lead in the directions I’m interested in going. I’ve had more success pressing for awareness and implementation of the Workplace Violence Prevention regulations that are part of my local jurisdiction’s Occupational Health and Safety Act.
I’m not sure what the next project will be, but I attended an interesting seminar a few months ago about our organization’s plans for change. I was intrigued by the conversations happening about improving our internal communication. I’ve also had some interesting conversations recently with others who want to push past the “corporate diversity” model toward a less ahistorical model of social justice or cultural competence. I’ll continue to explore those to find out which ones have some potential for constructive change.
Design for Breaks
I can’t do this all the time or I won’t stay in the classroom. I know that now. As of the beginning of January, I’ve reclaimed my Saturdays. No work on Saturdays. It makes the rest of my week slightly more stressful, but it’s worth it. For the first few weeks, I spent the entire day alternately reading and napping. Knowing that I have that to look forward to reminds me that the stakes aren’t as high as they sometimes seem.
I’m also planning to go on deferred leave for four months starting next January. After that, I’ve made it a priority to find a way to work half-time. The kind of “intellectually taxing” enrichment that I need, in order for teaching to be satisfying, takes more time than is reasonable on top of a full-time job. I’m not willing to permanently sacrifice my ability to do community volunteer work, spend time with my loved ones, and get regular exercise. That’s more of a medium-term goal, but I’m working a few leads already.
Anyone have any suggestions about what I should do with 4 months of unscheduled time starting January 2014?
The past semester has been a tough slog with my first-year class. I’m slowly figuring out what resources and approaches were missing. Last year, I launched myself headfirst (and underprepared) into inquiry-based learning because most of the class members were overflowing with significant, relevant questions.
This year, the students are barely asking questions at all, and when they do, the questions are not very relevant — they don’t help us move forward toward predicting circuit behaviour, troubleshooting, or any of the other expressed goals we’ve discussed as a class. They’re mostly about electrical safety, which, don’t get me wrong, is important, but talking about how people do and don’t get electrocuted has limited value in helping us understand amplifiers. I felt like I juiced those questions as much as I could, but it only led to more questions about house wiring and car chassis.
If I’m serious about inquiry-based learning, I have to develop a set of tools that allow me to adapt to the group. Right now I feel like my approach only works if the group is already fairly skilled at distinguishing between what we have evidence for and what we just feel like we’ve heard before, and at asking significant questions that move toward a specific goal. In other words, I wasn’t teaching them to reason scientifically; I was filtering out those who already knew from those who didn’t. Here are some of the things I need to be more prepared for.
I have never had so much trouble getting students to use their meters correctly. Here we are in second semester, and I still have students confidently using incorrect settings. I’d be happier if they were unsure, or had questions, but no, many are not noticing that they have problems with this. And I don’t mean being confused about whether you should measure 1.5 V on the 20 V or the 2000 mV setting… I mean measuring 0.1 Ω on the 200 kΩ setting.
I switched this year to teaching them about current first, rather than resistance (like I did last year). I’m loath to reconsider because current is the only one that lends itself to causal thinking and sense-making early in the year (try explaining resistance to someone who doesn’t know what current is… and “electric potential,” to someone who doesn’t know anything formal about energy or force or fields, is just hell). Could this be part of why they’re struggling so much to use their meters correctly? Is there something about the “current first” approach that bogs them down with cognitive load at a stage when they just need some repetitive practice? I’m curious to check out the CASTLE curriculum, maybe over the summer, to try to figure some of this out.
I created a circuit-recording template last fall that I thought was such a great idea… it had a checklist at the top to help the students notice if they’d forgotten anything. Guess what? They started measuring without thinking about the meaning of the measurements — measuring as if it was just something to be checked off a list! No observations. No questions. No surprise at unusual or unintuitive numbers. Damn. The checklist is gone and never coming back — next year I’ll make sure we only measure things that the students have found a reason to measure.
Last term, I waited far too long to give the quiz on measurement technique. I knew they weren’t ready, and I kept thinking that if we spent more time practicing measuring (while exploring the questions we had painstakingly eked out), that it would get better. Finally, we were so far behind that I gave the quiz anyway. The entire class failed it (not a catastrophe, given the reassessment policy), and the most common comment when we reviewed the quiz was “why didn’t you tell us this before??” Uh. Right. Quiz early, quiz often.
Guess what the teacher wants
The degree of “teacher-pleasing” being attempted is disheartening. Students are almost always uncomfortable making mistakes, using the word “maybe” in situations where it is genuinely the most accurate way to express the strength of our data, or re-evaluating what they think of as “facts.” But the extent of it this year is unusual. There’s a high rate of students anxiously making up preposterous answers rather than saying “I don’t know.”
I tend toward a pretty aggressive questioning style — the kind of “what causes that, why does that happen” bluntness I would use with colleagues to bat ideas around. I’ve changed my verbal prompts to “what might cause that?” and “what could possibly be happening?” in the hope that they will help students discern whether they are certain or not, and also help them transition toward communicating the tentativeness of ideas for which we have little evidence. Obviously, I take care to draw out the reasoning and evidence in support of ideas, regardless of whether they’re canonical or not, and conversely make sure we discuss evidence against all of our ideas, including the “right” ones. I try to honour students’ questions by tracking them and letting them choose from among the class’s questions when deciding what to investigate next. But valuing their questions and thinking is clearly not enough.
I gave a test question last semester that asked students to evaluate some “student” reasoning. It used the word “maybe” in a completely appropriate way, and that’s what I heard outraged responses about from half the class. They thought the reasoning was poor (and also reported that it was badly written!) because of it. Again, we practiced explicitly, but sometimes I feel like I’m undermining their faith in “right answer” reasoning without helping them replace it with something better…
On the odd occasion when I ask someone a question and they say “I don’t know,” I make a point of not putting them on the spot, but of gathering info/evidence/ideas from other students for the first student to choose from, or breaking the class into small groups and asking them to discuss. I try to make sure that the person who said “I don’t know” has as few negative consequences as possible. Yet the person who says it inevitably looks crestfallen.
Talking in class
The frequency of students speaking up in class is at an all-time low. I wonder if this has been influenced by my random cold-calling — they figure I’ll call on them eventually so there’s no sense putting their hand up to make a comment or ask a question? The thing is, they don’t ask those questions when I call on them — just answer the question I ask.
At the same time, the frequency of whispered side conversations is at an all-time high, whether the speaker with the floor is me or another student. I think I’m unusually sensitive to this — I find it completely distracting, and can barely maintain my train of thought if students are whispering to each other. Maybe that’s partly my hearing, which is fairly acute — I can actually hear their whole conversation, even if they’re whispering at the back of the room (keep in mind that there are only 17 people and the room is pretty small). So my standard response to this is one warning during class (followed by a quiet, private conversation after class) — if it happens again, they’re leaving the room. Is this part of why they’re afraid to talk out loud — because I crack down on the talking under their breath? I’m open to other ways of responding but out of ideas at the moment.
Even the strongest students are still having trouble explaining causes of physical effects. They know I won’t accept a formula as a cause, but they can’t explain why, and when I ask someone to explain a cause, they will consistently give a formula anyway (figuring that an answer is always better than no answer, I guess). Next approaches: asking them to write down the cause, then discuss it in groups.
As Jason articulates clearly, I think that my students need more help motivating and strengthening their scientific discourse. He summarizes a promising-sounding approach called Guided Reciprocal Questioning as follows:
- Learn about something.
- Provide generic question frames.
- Students generate questions individually.
- Students discuss the questions in their groups.
- Share out.
I do something similar to #1-3, but I’m ready to try #4-5, with appropriate “discussion frames”, to see if I can help the students hold each other accountable to their knowledge. Right now, they barely propose questions or answers, but when they do, the class seems to accept them, even if they contradict something else we just talked about.
Also, Janet Abercrombie wrote recently in the comments about a Question Formulation Technique that I’d like to look into some more.
Conclusion: It works anyway
The whole experience was kind of heart-breaking. But the conversations with students kept convincing me that I had to do it anyway. I don’t know how many students took the time to say to me, “whoa, it seems like you actually want us to understand this stuff.” The look of astonishment really said it all. The bottom line is, this group is a much better test of the robustness of my methods than last year’s group could be.