You are currently browsing the category archive for the ‘Critical Thinking’ category.
As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year. The one that presses on me the most is “readiness.” In other words,
- What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
- How do I make sure they get the greatest opportunity to learn those things?
- How do I make sure no one graduates without those things? And most frustratingly,
- How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?
Some people might disagree that some of these points are worth worrying about. If you don’t teach in a trade school, these questions may be irrelevant or downright harmful. K-12 education should not be a trade school. Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be). However, I DO teach in a trade school, so these are the questions that matter to me.
Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it. It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks "vocational" is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.
In that light, my questions about job-readiness have reached the point of obsession. To be a technician is to inquire. It is to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults. However, teaching my students to inquire means they must inquire. I can't force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.). At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs. I haven't found the balance yet.
I’ll probably write more about this as I try to figure it out. In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing. Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.” In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question. In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.
In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms. During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement. There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their level of readiness in the telecom field might be much higher than if they worked on motor controls. Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics). There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments). But overall, it’s a really useful experience.
Still, I continue to wonder about the accuracy of my judgement. I also wonder about how to open this conversation with my colleagues. It seems like something it would be useful to work on together. Or would it? The comments on Wiggins’ post are almost as interesting as the post itself.
It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.
The question isn't, "why aren't students prepared for college." It's also not "are college profs' expectations reasonable." It's "why are we so mistaken about what college instructors expect?"
My students, too, often miss this kind of subtle distinction. It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).
Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations. Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.
A similar problem is reported about college instructors. The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.
Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety. My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments. Most of them literally can’t do it at first. It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection. Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”? If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure. As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”
One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have. For those of us who don’t, let’s brainstorm about how we can get it, or what else might help. Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.
I’m interested in commenter I Hodge’s point about the survey questions. The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension. In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared. This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college. I wonder why they posed those two questions so differently?
Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students. Maybe it's different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses. Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards, standardized tests, or other systemic problems, I don't see how colleges can fix that, other than by administering an entrance test. Maybe that's more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one. Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system? I really don't know the answer to this.
Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues. I know I’d be all for it. There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college. Are we ready for this?
When we start investigating a new topic or component, I often ask students to make inferences or ask questions by applying our existing model to the new idea. For example, after introducing an inductor as a length of coiled wire and taking some measurements, I expect students to infer that the inductor has very little voltage across it because wires typically have low resistance. However, for every new topic, some students will assume that their current knowledge doesn’t relate to the new idea at all. Although the model is full of ideas about voltage and current and resistance and wires, “the model doesn’t have anything in it about inductors.”
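(The inference I'm fishing for is one line of Ohm's law. With numbers assumed for illustration, since the real values depend on the coil, a winding whose wire measures 0.2 Ω and carries 100 mA should only drop

$$V = IR = (0.1\ \text{A})(0.2\ \Omega) = 20\ \text{mV}$$

which is why "wires typically have low resistance" already tells you something useful about inductors.)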
There are a few catchphrases that damage my calm, and "the model doesn't have anything in it about X" is one of them. I was discussing it with my partner's daughter, who's a senior in high school and is often able to provide insight into my students' thinking. I was complaining that students seem to treat the model (of circuit behaviour knowledge we've acquired so far) like their baby, fiercely defending it against all "threats," and that I was trying to convince them to have some distance, to allow for the possibility that we might have to change the model based on new information, and not to take it so personally. She had a better idea: that they should indeed continue to treat the model like a baby — a baby who will grow and change and isn't achieving its maximum potential with helicopter parents hovering around preventing it from trying anything new.
The next time I heard the offending phrase, I was ready with "How do you expect a baby model to grow up into a big strong model, unless you feed it lots of nutritious new experiences?"
It worked. The students laughed and relaxed a bit. They also started extending their existing knowledge. And I relaxed too — secure in the knowledge that I was ready for the next opportunity to talk about “growth mindset for the model.”
My students use the same assessment rubric for practically every new source of information we encounter, whether it’s something they read in a book, data they collected, or information I present directly. It asks them to summarize, relate to their experience, ask questions, explain what the author claims is the cause, and give support using existing ideas from the model. The current version looks like this (click through to zoom or download):
Assessment for Learning
There are two goals:
- to assess the author’s reasoning, and help us decide whether to accept their proposal
- to assess one’s own understanding
If you can’t fill it in, you probably didn’t understand it. Maybe you weren’t reading carefully, maybe it’s so poorly reasoned or written that it’s not actually understandable, or maybe you don’t have the background knowledge to digest it. All of these conditions are important to flag, and this tool helps us do that.
The title says “Rubric for Assessing Reasoning,” but we just call them “feedbacks.”
Recently, there has been a spate of feedbacks turned in with the cause and/or the "support from the model" section left blank or filled with vague truisms ("this is supported by lots of ideas about atoms," or "I'm looking forward to learning more about what causes this.")
I knew the students could do better — all of them have written strong statements about cause in the past (in chains of cause and effect 2-5 steps long). I also allow students to write a question about cause, instead of a statement, if they can’t tell what the cause is, or if they think the author hasn’t included it.
So today, after I presented my second draft of some information about RMS measurements, I showed some typical examples of causal statements and supporting ideas. I asked students to rate them according to their significance to the question at hand, then had some small group discussions. I was interested (and occasionally surprised) by their criteria for what makes a good statement of cause, and what makes a good supporting idea. Here’s the handout I used to scaffold the discussions.
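(For readers outside the electronics world: the quantities at stake are three ways of stating the size of an AC signal. For a sinusoid, and only for a sinusoid, they are related by

$$V_{pp} = 2\,V_{pk}, \qquad V_{rms} = \frac{V_{pk}}{\sqrt{2}} \approx 0.707\,V_{pk}$$

which is exactly the kind of "similar things" the students mention distinguishing below.)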
The students’ results:
A statement of cause should …
- Be relevant to the question
- Help us understand the question or the answer
- Not leave questions unanswered
- Give lots of info
- Relate to the model
- Explain what physically makes something happen, or ask a question that would help you understand the physical cause
- Help you distinguish between similar things (like the difference between Vpk, Vpp, Vrms)
- Not beg the question (not state the same thing twice using different words)
- Be concrete
- Make the new ideas easier to accept
- Use definitions
Well, I was looking for an excuse to talk about definitions — I think this is it!
Supporting ideas from the model should…
- Help clarify how the electrons work
- Help answer or clarify the question
- Directly involve information to help relate ideas
- Help us see what is going on
- Give us reasoning so we can in turn have an explanation
- Clarify misunderstandings
- Allow you to generalize
- Support the cause, specifically.
- Be specific to the topic, not broad (like, “atoms are made of protons, electrons, and neutrons.”)
- Not use a formula
- It helps if you understand what’s going on, it makes it easier to find connections
The Last Word
Which ones would you emphasize? What would you add?
How can I help students make causal thinking a habit? I’ve written before about my struggles helping students “do cause” consistently, and distinguishing between “what made it happen” vs. “what made me think it would happen.” Most recently, I wrote about how using a biological model of the growing brain might help develop the skills needed to talk about a physical model of atomic particles.
Sweng1948 commented that cause and definition become easy to distinguish when we talk about pregnancy, and seemed a little concerned that it would come off as flippant. To me, it doesn’t — especially because I use that example all the time. Specifically, I talk about the difference between “who/what you are” (the definition of you) and “what caused you” (a meeting of sperm and egg). In the systems of belief that my students tend to have, people are not thought to “just happen” or “cause themselves.” It can help open the conversation. However, even when I do this, they are surprisingly unlikely to transfer that concept to atomic particles.
Biology Vs. Physics
My students seem to regard cause differently in biology vs. physics. They are likely to say that eating poorly causes malnutrition and eating well contributes to causing good health; they are less likely to say that the negative charge of electrons causes them to move apart, and more likely to say that electrons move apart because they’re electrons, and that’s what electrons do.
Further, once they conclude that moving two electrons apart causes their repulsion to weaken, they are unable to decide whether moving them closer together strengthens it (I have no idea what to do about this). It’s also often opaque to students whether one electron is repelling the other, or the second one is repelling the first. This happens in various contexts: the other day, a student presented the idea that cooling a battery would lower its voltage. Several students were frustrated because they had asked what would raise a battery’s voltage, not what would lower it, and were a bit aggressive in telling the presenting student that he had not answered their question.
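For the record, both of those confusions dissolve, at least on paper, in Coulomb's law (charges and distance as placeholders):

$$F = \frac{k\,q_1 q_2}{r^2}$$

The inverse-square dependence answers the first question: halve $r$ and the repulsion quadruples, so moving electrons closer together does strengthen it. And since the force comes as an equal-and-opposite pair, "which electron is repelling the other" isn't a well-formed question; each repels the other, symmetrically. Whether a formula actually helps my students reason about cause is, of course, the whole problem.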
That’s one of the reasons I was interested in using this “brain” model as a way to open the conversation about causality and models in general; they do cause better with biology. I’ll have to figure out next year how to build a bridge between cells and atoms…
I’m not sure why it’s so difficult. Here are a few stabs at it:
- Is it because they see causality as connected to intention — in other words, you are only causing things if you do them on purpose?
- Does their experience of their own conscious agency help them see how their choices are causes that have demonstrable effects — such that things that don't have choices also seem not to cause things?
- Is it because living things are easier to see and relate to than electrons?
- Is it because they see cause as inextricably linked to desire? Something like, “What caused me to buy a bag of candy is that I wanted it. So, electrons must move because they want to.”
I sometimes fool myself into thinking that my students have understood some underlying principle when they anthropomorphize particles and forces: “The electron wants to move toward the proton.” “Voltage is when an electron is ready to move to another atom.” I assume that they are constructing a metaphor to symbolize what’s going on, or using a verbal shorthand. Then I realize, many students don’t think of the electron’s “desire” as a metaphor, and can’t connect this to ideas about energy, charge, etc. Consider this my plea to K-12 teachers not to say that stuff, and when students bring it up, to engage with them about what exactly that means. Desires are things we can use willpower to sublimate. Forces, not so much. That’s why it’s called force.
Something about cause leads to students treating particles (and, for that matter, compilers and microprocessors) as if they, like people, might act the way we expect, but they also might not. I can’t tell whether it’s because there could be an opposing force, or “just because.” If it’s the former, then there’s a kernel of intellectual humility here that I respect: a sort of submission to the possibility that there are forces we don’t understand, and our model will only work if there are no opposing forces we haven’t accounted for. However, I often can’t find out whether they’re talking/thinking about science or faith, because the responses to my questions are often defensive, along the lines of “My physics teacher said it’s complicated. The reason they didn’t teach it to us in high school is that it’s just too hard for anyone to learn, unless they’re a theoretical physicist.” (*sigh*. Hoping the growth-mindset ideas will help with this).
We Can’t Understand It Fully, So There’s No Point
Also, the “we don’t understand it fully” shrug seems to be anti-generative: it leads to an intellectual abdication. It’s a defence against the idea that we should just go ahead and use our model to make predictions, then test the predictions to find the holes in the model. Or maybe I’ve got it backwards — maybe the intellectual abdication causes the shrug. I’m back to growth mindset again, but not about growing ourselves — growth mindset for the model too! Fixed mindset says there’s no point making a prediction that might be wrong. Only a growth mindset sees the value in testing a prediction with the intention of helping the model (and ourselves) get stronger.
I expect that the word “potential” is part of the problem here (as in, potential difference and potential energy) — to my students, “potential” means something that you need to make a decision about. They say that they will “potentially” go to the movies that night, which means they haven’t chosen yet. By that logic, if you have a “potential difference”, that means there might be a difference, but there might not, too. Depending on what the electron decides. Potential energy? Maybe you’ve got (or will later have) energy, maybe you don’t. What’s strong about this thinking is that they’re right that there’s something that “might or might not” happen (current, acceleration, etc.). What’s frustrating is that I don’t know how to help them unpack the difference between a “force” and a “decision” in a way that actually helps.
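For what it's worth, the physics usage is a definite, measurable quantity, not an undecided one. Potential difference is energy transferred per unit charge:

$$V = \frac{W}{q}$$

so a 9 V battery gives 9 joules to every coulomb that passes through it, whether or not any charge ever does. The thing my students correctly sense "might or might not" happen is the current, not the difference.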
(And no, the connections to the uncertainty principle, the observer effect, the unpredictability of chaotic systems, and the challenges to causality posed by modern physics are not lost on me… but I’d rather my students work through “wrong” conclusions via confidence in reasoning, than come to some shadow of the “right” conclusions via an assumption of their own intellectual inadequacy.)
I read a blog post recently about the use of smartphones in the classroom, and it was thought-provoking enough to make me want to flesh out some ideas. I submitted them as a comment two weeks ago, but they didn’t appear on the blog. My inquiry about whether the comment was rejected or simply lost in the ether also went unacknowledged, so I thought I’d post it here.
Smartphones Work Well In My Classroom For…
I really appreciate when students take photos of the board, so they can pay attention and join the conversation instead of copying what I’m writing. A document-scanning app (e.g. CamScanner) can correct parallax and improve contrast, making it look like you own a scanner the size of the whiteboard.
If students are working on team-sized interactive whiteboards, it can also be a great way to capture what they’ve come up with as a group, instead of having to re-copy it into their notebook.
Tablets are extra-useful for this since the larger screen makes it easier to read and annotate the photos — especially useful are EzPDF and Freenote, although obviously cross-platform support can be an issue.
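(If you're curious how the parallax correction works, it's essentially a perspective transform: the app finds the four corners of the page or board and remaps them onto a rectangle. Here's a minimal sketch in Python with OpenCV, with the corner coordinates hard-coded rather than auto-detected the way a real scanning app would do it:)

```python
import cv2
import numpy as np

# A photo of a whiteboard taken at an angle
img = cv2.imread("whiteboard.jpg")

# The board's four corners in the photo, clockwise from top-left
# (picked by eye here; scanning apps detect them automatically)
src = np.float32([[120, 80], [1050, 60], [1100, 700], [90, 720]])

# Where those corners should land in the corrected image
w, h = 1000, 700
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Compute the 3x3 perspective matrix and warp the photo flat
M = cv2.getPerspectiveTransform(src, dst)
flat = cv2.warpPerspective(img, M, (w, h))

cv2.imwrite("whiteboard_flat.jpg", flat)
```

(Contrast improvement is the other half of what those apps do, typically an adaptive threshold or a levels adjustment.)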
I also like having students take videos of themselves solving problems or demonstrating experiments — a big help when I don’t have time to see each person or group “live.” Plus, hearing their voices as they describe their thinking gives me a better feel for what they’ve understood vs. what they’ve memorized.
The interesting thing is that many of my students, contrary to the received wisdom about digital natives, are surprisingly reticent about this. It takes a significant amount of direct instruction for students to try these approaches, even when it seems to me that it would be a huge time-saver. If I give an online and a conventional option for an assignment, the students overwhelmingly choose the conventional route (using a paper notebook instead of a blog so that their essay research is searchable… or submitting written assignments instead of screencasts… or typing instead of using speech to text for dictating papers, for example — even Windows 7 has native support that is reasonably good).
My students, for various reasons, don’t have much time for adjusting or troubleshooting their devices (figuring out where the camera stores its pictures so that the pics can be attached to an email, for example) and often do not understand that folders are hierarchical.
But I Can Drive Without Understanding Engines, Right?
The good news is, teachers who fear that their students far outpace them in skill probably have less to fear than they think. The bad news is, I suspect that we (including the students) tend to overestimate the degree to which using technology (as opposed to understanding it, or directing it) is inherently useful.
It’s a bit like knowing how to drive a car but not understanding that pressing on the accelerator is what uses up gas and increases the braking distance. You can make the car go fast, but you probably can’t figure out whether going fast is a good idea at the moment. Maybe you follow the speed limit diligently without being able to judge whether it’s prudent under the conditions; maybe you don’t follow the speed limit because you don’t know of any reason for its importance. Besides being dangerous, both approaches are unthinking — abdicating responsibility to either the rule-makers or other drivers.
Making Vs. Using
One approach that seems to be having a lot of success is systematically teaching students to become makers and fixers of classroom technology instead of users/consumers. I’m also excited about making programming accessible to kids. Besides improving conceptual understanding and critical thinking, this approach can help us broach the idea that it’s not good enough to be a “native” of a society in which someone else holds the reins of power. My question to them is not whether they are “digital natives” but whether they are “digital serfs.” In other words, time to start paying attention to who are the programmers, and who are the programmed.
I went looking for a resource about "growth mindset" that I could use in class, because I am trying to convince my students that asking questions helps you get smarter (i.e. understand things better). I appreciate Carol Dweck's work on her website and her book, but I don't find them
- concise enough,
- clear enough, or
- at an appropriate reading level for my students.
What I found was Diana Hestwood and Linda Russel’s presentation about “How Your Brain Learns and Remembers.” The authors give permission for non-profit use by individual teachers. It’s not perfect (I edited out the heading that says “You are naturally smart” … apologies to the authors) and it’s not completely in tune with some of the neuroscience research I am hearing about lately, but it meets my criteria (above) and got the students thinking and talking.
Despite their warning that it's not intended to stand on its own and that the teacher should lead a discussion, I'd rather poke my eyes out than stand in front of the group while reading full paragraphs off of slides. I found the full-sentence, full-paragraph "presentation" to work on its own just fine (CLARIFIED: I removed all the slides with yellow backgrounds, and ended at slide 48). I printed it, gave it to the students, and asked them to turn in their responses to the questions embedded in it. I'll report back to them with some conversational feedback on their individual papers and some class time for people to raise their issues and questions — as usual, discussion after the students have tangled with the ideas a bit.
The students really went for it. They turned in answers that were in their own words (a tough ask for this group) and full of inferences, as well as some personal revelations about their own (good and bad) learning experiences. There were few questions (the presentation isn’t exactly intended to elicit them) but lots of positive buzz. About half the class stayed late, into coffee break, so they could keep writing about their opinions of this way of thinking. Several told me that “this was actually interesting!” (*laugh*) I also got one “I’m going to show this to my girlfriend” and one, not-quite-accusatory but clearly upset “I wish someone had told me this a long time ago.” (*gulp*)
I found a lot to like in this presentation. It’s a non-threatening presentation of some material that could easily become heavily technical and intimidating. It’s short, and it’s got some humour. It’s got TONS of points of comparison for circuits, electronic signal theory, even semiconductors (not a co-incidence, obviously). Most importantly, it allows students to quickly develop causal thinking (e.g. practice causes synapses to widen).
Last year I found out in February that my students couldn’t consistently distinguish between a cause and a definition, and trying to promote that distinction while they were overloaded with circuit theory was just too much. So this year I created a unit called “Thinking Like a Technician,” in which I introduced the thinking skills we would use in the context of everyday examples. Here’s the skill sheet — use the “full screen” button for a bigger and/or downloadable version.
It helped a bit, but meant that we spent a couple of weeks talking about roller coasters, cars, and musical instruments. Next year, this is what we'll use instead. It'll give us some shared vocabulary for talking about learning and improving — including why things that feel "easy" don't always help, why things that feel "confusing" don't mean you're stupid, why "feeling" like you know it isn't a good test of whether you can do it, and why I don't accept "reviewing your notes" as one of the things you did to improve when you applied for reassessment.
But this will also give us a rich example of what a “model” is, why they are necessarily incomplete and at least a bit abstracted, and how they can help us make judgement calls. Last year, I started talking about the “human brain model” around this time of the year (during a discussion of why “I’ll just remember the due date for that assignment” is not a strong inference). That was the earliest I felt I could use the word “model” and have them know what I meant — they were familiar enough with the “circuits and electrons model” to understand what a model was and what it was for. Next year I hope to use this tool to do it the other way around.
The first-year students are shocked that we accept all these ideas about electrons just because the sources support each other, even though no one's seen an electron, and even scientists aren't completely sure what's going on. They've been asking a lot of questions about "how can we ever be sure of anything?" We've talked a lot about the difference between accepting an idea based on evidence and believing it on faith, how to judge the quality of sources, etc. They've been practicing asking clarifying questions, summarizing each other's ideas, and identifying cause and effect. In that vein, a student came into my office the other day to tell me this interesting story…
I lead an alliance of players in [online game] and the other day I couldn’t log in. I checked all the computers at school too, and they did the same thing. So then I called tech support for [internet carrier], they said it’s not them. So I asked, “Well, how is it not you??” They eventually said that GoDaddy hosts [game server], and GoDaddy’s servers were down. So then I tried to call GoDaddy, because I want to post something on facebook but not until I checked my sources. And I was like, ‘it’s just like school, whoa.’ I tried to explain it to my boyfriend but he said ‘I think you’re @#$%ed.’
She laughed in delight.
Last February, I had a conversation with my first-year students that changed me.
On quizzes, I had been asking questions about what physically caused this or that. The responses had a weird random quality that I couldn’t figure out. On a hunch, I drew a four-column table on the board, like this:
I gave the students 15 minutes to write whatever they could think of.
I collected the answers for "cause" and wrote them all down. Nine out of ten students said that a difference of electrical energy levels causes voltage. This is roughly like saying that car crashes are caused by automobiles colliding.
Me: Hm. Folks, that’s what I would consider a “definition.” Voltage is just a fancy word that means “difference of electrical energy levels” — it’s like saying the same thing twice. Since they’re the same idea, one can’t cause the other — it’s like saying that voltage causes itself.
Student: so what causes voltage — is it current times resistance?
Me: No, formulas don’t cause things to happen. They might tell you some information about cause, and they might not, depending on the formula, but think about it this way. Before Mr. Ohm developed that formula, did voltage not exist? Clearly, nature doesn’t wait around for someone to invent the formula. Things in nature somehow happen whether we calculate them or not. One thing that can cause voltage is the chemical reaction inside a battery.
Other student: Oh! So, that means voltage causes current!
Me: Yes, that’s an example of a physical cause. [Trying not to hyperventilate. Remember, it's FEBRUARY. We theoretically learned this in September.]
Me: So, who thinks they were able to write a definition?
Students: [explode in a storm of expostulation. Excerpts include] "Are you kidding?" "That's impossible." "I'd have to write a book!" "That would take forever!"
Me: [mouth agape] What do you mean? Definitions are short little things, like in dictionaries. [Grim realization dawns.] You use dictionaries, right?
Students: [some shake heads, some just give blank looks]
Me: Oh god. Ok. Um. Why do you say it would take forever?
Student: How could I write everything about voltage? I’d have to write for years.
Me: Oh. Ok. A definition isn’t a complete story of everything humanly known about a topic. A definition is… Oh jeez. Now I have to define definition. [racking brain, settling on "necessary and sufficient condition," now needing to find a way to say that without using those words.] Ok, let’s work with this for now: A definition is when you can say, “Voltage means ________; Whenever you have ___________, that means you have voltage.”
Students: [furrowed brows, looking amazed]
Me: So, let’s test that idea from earlier. Does voltage mean a difference in electrical energy levels? [Students nod] Ok, whenever you have a difference in electrical energy levels, does that mean there is voltage? [Students nod] Ok, then that’s a definition.
Third student: So, you flop it back on itself and see if it’s still true?
Me: Yep. ["Flopping it back on itself" is still what I call this process in class.] By the way, the giant pile of things you know about voltage, that could maybe go in the “characteristics” column. That column could go on for a very long time. But cause and definition should be really short, probably a sentence.
Students: [Silent, looking stunned]
Me: I think that’s enough for today. I need to go get drunk.
Ok, I didn’t say that last part.
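If you like notation, the "flop it back on itself" test is just checking for a biconditional. A definition has to survive the flop:

$$\text{voltage} \iff \text{a difference in electrical energy levels}$$

while a cause points only one way:

$$\text{chemical reaction in a battery} \implies \text{voltage}, \qquad \text{voltage} \not\Rightarrow \text{chemical reaction}$$

since generators and static charge produce voltage with no chemistry involved at all.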
When I realized that my students had lumped a bunch of not-very-compatible things together under “cause,” other things started to make sense. I’ve often had strange conversations with students about troubleshooting — lots of frustration and misunderstanding on both sides. The fundamental question of troubleshooting is “what could cause that,” so if their concept of cause is fuzzy, the process must seem magical.
I also realized that my students did not consistently distinguish between “what made you think that” and “what made that happen.” Both are questions about cause — one about the cause of our thinking or conclusions, and one about the physical cause of phenomena.
Finally, it made me think about the times when I hear people talk as though things have emotions and free will — especially high-tech products like computers are accused of “having a bad day” or “refusing to work.” Obviously people say things like that as a joke, but it’s got me thinking, how often do my students act as though they actually think that inanimate objects make choices? I need a name for this — it’s not magical thinking because my students are not acting as though “holding your tongue the right way” causes voltage. They are, instead, acting as though voltage causes itself. It seems like an ill-considered or unconscious kind of animism. I don’t want to insult thoughtful and intentional animistic traditions by lumping them in together, but I don’t know what else to call it.
Needless to say, this year I explicitly taught the class what I meant by “physical cause” at the beginning of the year. I added a metacognition unit to the DC Circuits course called “Technical Thinking” (a close relative of the “technical reading” I proposed over a year ago, which I gradually realized I wanted students to do whether they were reading, listening, watching, or brushing their teeth). Coming soon.
How I got my students to read the text before class: have them do their reading during class.
Then, the next day, I can lead a discussion among a group of people who have all tangled with the text.
It’s not transformative educational design, but it’s an improvement, with these advantages:
- It dramatically reduces the amount of time I spend lecturing (a.k.a. reading the students the textbook), so there’s no net gain or loss of class time.
- The students are filling in the standard comprehension constructor that I use for everything — assessing the author’s reasoning on a rubric. That means they know exactly what sense-making I am asking them to engage in, and what the purpose of their reading is.
- When they finish reading, they hand in the assessments to me, I read them, and prepare to answer their questions for next class. That means I’m answering the exact questions they’re wondering about — not the questions they’ve already figured out or haven’t noticed yet.
- Knowing that I will address their questions provides an incentive to actually ask them. It’s not good enough to care what they think if I don’t put it into action in a way that’s actually convincing to my audience.
- Even in a classroom of 20 people, each person gets an individualized pace.
- I am free to walk around answering questions, questioning answers, and supporting those who are struggling.
- We're using a remarkable technology that allows students to think at their own pace, pause as often/long as they like, rewind and repeat something as many times as they like, and (unlike videos or podcasts) remains intelligible even when skipping forward or going in slow-mo. This amazing technology even detects when your eyes stray from it, and immediately stops sending words to your brain until your attention returns. Its battery life is beyond compare, it boots instantly, weighs less than an iPod nano, can be easily annotated (even supports multi-touch), and with the right software, can be converted from visual to auditory mode…
It’s a little bit JITT and a little bit “flipped-classroom” but without the “outside of class” part.
I often give a combination of reading materials: the original textbook source, maybe another tertiary source for comparison — e.g. a Wikipedia excerpt, then my summary and interpretation of the sources, and the inferences that I think follow from the sources. It’s pretty similar to what I would say if I was lecturing. I write the summaries in an informal tone intended to start a conversation. Here’s an example:
And here’s the kind of feedback my students write to me (you’ll see my comments back to them in there too).
Highlights of student feedback:
Noticing connections to earlier learning
When I read about finite bandwidth, it seemed like something I should have already noticed — that amps have a limit to their bandwidth and it’s not infinite
When vout tries to drop, less opposing voltage is fed back to the inverting input, therefore v2 increases and compensates for the decrease in Avol
Noticing confusion or contradiction
What do f2(OL) and Av(OL) stand for?
I’m still not sure what slew-induced distortion is.
I don’t know how to make sense of the f2 = funity/Av(CL). Is f2 the bandwidth?
In [other instructor]'s course, we built an audio monitor, and we used an op amp. We used a somewhat low frequency (1 kHz), and we still got a gain of 22.2. If I use the equation, the bandwidth would be 45 Hz? Does this mean I can only go from 955 Hz to 1045 Hz to get a gain of 22.2?
Asking for greater precision
What is the capacitance of the internal capacitor?
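To close the loop on the bandwidth questions above: the relation being asked about is the gain-bandwidth product. A worked example, assuming $f_{unity} = 1\ \text{MHz}$ (the usual textbook value; the actual op amp in the audio monitor may differ):

$$f_2 = \frac{f_{unity}}{A_{v(CL)}} = \frac{1\ \text{MHz}}{22.2} \approx 45\ \text{kHz}$$

And yes, $f_2$ is the bandwidth, but it's the upper edge of a passband that starts at DC: the gain of 22.2 holds from 0 Hz up to about 45 kHz, not in a narrow window centred on the 1 kHz test signal.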
Is this a “flipped classroom”?
One point that stuck with me about many "flipped classroom" conversations is designing the process so that students do the low-cognitive-load activities when they're home or alone (watching videos, listening to podcasts) and the high-cognitive-load activities when they're in class, surrounded by supportive peers and an experienced instructor.
This seems like a logical argument. The trouble is that reading technical material is a high-cognitive-load activity for most of my students. Listening to technical material is just as high-demand… with the disadvantage that if I speak it, it will be at the wrong pace for probably everyone. The feedback above is a giant improvement over the results I got two years ago, when second year students who read the textbook would claim to be “confused” by “all of it,” or at best would pick out from the text a few bits of trivia while ignoring the most significant ideas.
The conclusion follows: have them read it in class, where I can support them.
Today we were brainstorming ideas about electricity, practicing clarifying, and creating questions that start "What causes…". Some students are anxious about this and seem to fear that if they ask those questions, they will have to answer them. My goal is for us to draw the boundaries of what we do and don't know — not to get lost in some metaphysical endless loop. Facing the giant pile of what we don't know is hard sometimes.
I say “If it seems like we’ll never run out of questions, don’t worry. We don’t have to answer those questions — we’re just keeping track of what we have and haven’t answered. And anyway, if we ran out of questions, wouldn’t that be awful and boring?”
The answer from the back of the room is, “No, that would be great. And then I’d be smart.”
What’s my next move?