[XKCD comic. Caption: "The erratic feedback from a randomly-varying wireless signal can make you crazy."]

I'm thinking about how to make assessments even lower stakes, especially quizzes.  Currently, any quiz can be re-attempted at any point in the semester, with no penalty in marks.  For a student attempting a quiz a second time, I require a corrected quiz (if it was a quiz) and two completed practise problems as the application for reassessment. (FYI, a reassessment can also be submitted in any alternate format that demonstrates mastery, in lieu of a quiz, but students rarely choose that option.)

The upside of requiring practise problems is that it eliminates the brute-force approach, where students just keep randomly re-attempting quizzes, thinking they will eventually show mastery (this doesn't work, but it wastes a lot of time).  It also introduces some self-assessment into the process: we practise writing good-quality feedback, including trying to figure out what caused the mistake in the first place.

The downside is that the workload in our program is really unreasonable (dear employers of electronics technicians, if you are reading this: most hard-working beginners cannot go from zero to meeting your standards in two years.  Please contact me to discuss).  So students are really upset about having to do two practise problems.  I try to sell it as "customized homework": since I no longer assign homework practise problems, they are effectively exempting themselves from the "homework" in any area where they have already demonstrated proficiency.  The students don't buy it, though.  They put huge pressure on themselves to get things right the first time so they won't have to do any practise.  That, of course, sours our classroom culture and makes it harder for them to think well.

I’m considering a couple of options.  One is, when they write a quiz, to ask them whether they are submitting it to be evaluated or just for feedback.  Again, it promotes self-assessment: am I ready?  Am I confident?  Is this what mastery looks and feels like?

If they’re submitting for feedback, I won’t enter it into the gradebook, and they don’t have to submit practise problems when they try it next (but if they didn’t succeed that time, it would be back to practising).

Another option is simply to chuck the practise problem requirement.  I could ask for a corrected quiz and good-quality diagnostic feedback (written by students to themselves) instead.  It would be a shame, since the practice really does benefit them, but I'm wondering if it's worth it.

All suggestions welcome!

I heart zero

Here are some conversations that come up every year.

1. Zero Current

Student: “I tried to measure current, but I couldn’t get a reading.”

Me: “So the display was blank?”

Student: “No, it just didn’t show anything.”

(Note: Display showed 0.00)

2. Zero Resistance

Student: “We can’t solve this problem, because an insulator has no resistance.”

Me: “So it has zero ohms?”

Student: “No, it’s too high to measure.”

3. Zero Resistance, In a Different Way

Student: “In this circuit, X = 10, but we write R = 0 because the real ohms are unknown.”

(Note: The real ohms are not unknown.  The students made capacitors out of household materials last week, so they have already explored the idea that the plates have approximately zero ohms and the dielectric is considered open.)

4. Zero Resistance Yet Another Way

Student: “I wrote zero ohms in my table for the resistance of the battery since there’s no way to measure it.”

What I Wonder

  • Are students thinking about zero as an indicator that means "error" or "you're using the measuring tool wrong"?  A bathroom scale might show zero if you weren't standing on it.  A gas gauge shows zero when the car isn't running.
  • When students say "it has none," as in example 2, what is it that there is none of? They might mean "it has no known value," which might be true, as opposed to "it has no resistance."
  • Is this related to a need for more concreteness?  For example, would it help if we looked up the actual resistance of common types of insulation, or measured it with a megger?  That way we’d have a number to refer to.
  • #3 really stumps me. Is this a way of using "unknown" because they're thinking of the dielectric as an insulator that is considered "open," so that #3 is just a special case of #2?  Or is it "unknown" because the plates are considered to have zero resistance while the dielectric is considered open, so we "don't know" the resistance because it's both at the same time?  The particular student who said that finds it especially hard to express his reasoning, so he couldn't elaborate when I tried to find out where he was coming from.
  • Why does this come up so often for resistance, and sometimes for current, but never, as far as I can recall, for voltage?  I suspect it's because resistance and current feel concrete, like real phenomena students can visualize, so they're more able to experiment with their meanings.  I think they're avoiding voltage altogether.  It's about energy, which is weird to begin with; it's a difference of energies, which makes it less concrete, because it's not really the amount of anything, just the difference between two amounts; and on top of that, we never get to find out what the actual energies are, only the difference between them, which makes it even more abstract and hard to think about.
  • Since this comes up over and over about measurement, is it related to seeing the meter as an opaque, incomprehensible device that might just lie to you sometimes?  If so, this might be a kind of intellectual humility, an acknowledgement that they don't fully understand how the meter works.  That's still frustrating to me, though, because we spend time at the beginning of the year exploring how the meter works, so they actually do have the information to explain what inside the meter could produce a 0 A reading.  Maybe those initial explanations about meters aren't concrete enough; perhaps we should build one.  Sometimes students assume explanations are metaphors when actually they're literal causes.
  • Is it related to treating automated devices in general as "too complicated for normal people to understand"?  If that's what I'm reading into the situation, it explains why I have weirdly disproportionate irritation and frustration: I'm angry about this as a social phenomenon of elitism and disempowerment, and I assess the success of my teaching partly on the degree to which I succeed in subverting it… both of which are obviously not my students' fault.

Other Thoughts

One possibility is that they’re actually proposing an idea similar to the database meaning of “null” — something like unknown, or undefined, or “we haven’t checked yet.”

I keep suspecting that this is about a need for more symbols.  Do we need a symbol for “we don’t know”?  It should definitely not be phi, and not the null symbol — it needs to look really different from zero.  Question mark maybe?

If students are not used to school-world tasks where the best answer is “that’s not known yet” or “that’s not measurable with our equipment”, they may be in the habit of filling in the blank.  If that’s the case, having a place-holder symbol might help.
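For what it's worth, programming languages already make this distinction explicit.  Here's a minimal sketch (my illustration, not something we use in class), with Python's None standing in for the database null:

```python
# A toy model of the zero-vs-null distinction: 0 is a measured value;
# None means "no value recorded" (unknown, unmeasured, or unmeasurable).
from typing import Optional

def describe(resistance_ohms: Optional[float]) -> str:
    if resistance_ohms is None:
        return "unknown: not measured yet, or not measurable with our equipment"
    if resistance_ohms == 0:
        return "zero ohms: a real, informative measurement"
    return f"measured: {resistance_ohms} ohms"

print(describe(None))  # the placeholder-symbol case
print(describe(0.0))   # genuinely zero -- not an error, not nothing
```

In this model, 0 behaves like any other measurement, while None loudly refuses to be used in arithmetic, which is exactly the behaviour you'd want from a "we don't know yet" placeholder.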

This year, I've really started emphasizing the idea that zero, in a measurement, really means "too low to measure".  I've also experimented with guiding them to decipher the precision of their meters by asking them to record "0.00 mA" as "< 5 uA", or whatever is appropriate for their particular meter.  It helps them extend their conceptual fluency with rounding (since I am basically asking them to "unround"); it helps us talk about resolution; and it can help in our conversation about accuracy and error bars.  Similarly, "open" really means "resistance is too high to measure" (or relatedly, too high to matter), so we find out what their particular meter can measure and record it as "> X MOhms".
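The arithmetic of that "unrounding" move is mechanical enough to sketch (my illustration; the 10 uA step size is just the example meter from above):

```python
# "Unrounding" a zero reading: if the display shows 0.00, the true value
# must be below half of one count in the last digit -- otherwise it would
# have rounded up to 0.01.
def zero_as_bound(count_value: float, unit: str) -> str:
    # count_value = the size of one step in the last display digit
    return f"< {count_value / 2:g} {unit}"

# A display of "0.00 mA" counts in steps of 0.01 mA = 10 uA, so:
print(zero_as_bound(10, "uA"))  # -> "< 5 uA"
```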

The downfall there is that they start wanting to use those numbers for something.  They have many ways of thinking about the "unequal" signs, and one of them is to simply make up a number that corresponds to their idea of "significantly bigger".  For example, when solving a problem, if they're curious about whether electrons are actually flowing through air, they may use Ohm's law and plug in 2.5 MOhms for the resistance of air. At first I rolled with it, because it was part of a relevant, significant, and causal line of thinking.  The trouble was that I then didn't know how to respond when they started assuming that 2.5 MOhms was the actual resistance of air (any amount of air, incidentally…), and my suggestion that air might also be 2.0001 MOhms was met with resistance. (Sorry, couldn't resist.) (Ok, I'll stop…)
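One counter-move I'm considering (my own suggestion, using a hypothetical 12 V source): keep the inequality intact all the way through Ohm's law, so it yields a bound instead of an invented value:

    I = V / R < 12 V / 2 MOhms = 6 uA

That way the "greater than" sign keeps doing its job, instead of quietly collapsing into 2.5 MOhms.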

I’m afraid that this is making it hard for them to troubleshoot.  Zero current, in particular, is an extremely informative number — it means the circuit is open somewhere.  That piece of information can solve your problem, if you trust that your meter is telling you a true and useful thing. But if you throw away that piece of information as nonsense, it both reduces your confidence in your measurements, and prevents you from solving the problem.

Some Responses I Have Used

“Yes, your meter is showing 0.00 because there is 0.00 A of current flowing through it.”

“Don’t discriminate against zero — it isn’t nothing, it’s something important.  You’ll hurt its feelings!”

Not helpful, I admit!  If inquiry-based learning means that “students inquire into the discipline while I inquire into their thinking”*, neither of those is happening here.

Some Ideas For Next Year

  • Everyone takes apart their meter and measures the current, voltage, and resistance of things like the current-sense resistor, the fuse, the leads…
  • Insist on more consistent use of "less than 5 uA" or "greater than 2 MOhms" so that we can practise reasoning with inequalities
  • “Is it possible that there is actually 0 current flowing?  Why or why not?”
  • Other ideas?

*I stole this definition of inquiry-based learning from Brian Frank, on a blog post that I have never found again… point me to the link, someone!

[Photo: best hat ever]

What I Did On My Blogging Hiatus

It’s been a busy and fruitful year and a half since I last wrote. Teaching highlights:

  • I finally got my teaching to play nice with the rest of my life — down from an abominable 80-100 hours of work per week to a manageable 60 (hint: standards-based grading was part of the solution, not the problem).
  • I noticed that standards-based grading and inquiry-based learning (I aspire to something along these lines) were not just challenging my students' understanding of "right and wrong answers" on tests, but also their understanding of "right and wrong" moral behaviour in the world.  No, really.  I saw a sharp uptick in classroom conflict (about course ideas), out-of-class conflict (about everything else), and tearful moral crises.
  • I found a balance between inquiry-based learning and you-have-to-know-this-because-employers-say-so that lets me sleep at night.
  • I urgently started learning and practising ways to help students enter peacefully into disagreement.  My classroom management got 100 times better, partly because I improved our beginning-of-year conversation about community agreements, partly because of these unusually useful online courses, and partly because I got better at noticing and encouraging these "intellectual traits".

Not-Directly-Teaching-Related Highlights

  • I took a semester off using my contract’s deferred salary plan, from January – June 2014
  • I learned to camp solo in the backcountry, including some winter trips
  • I successfully applied for a reduced instructional assignment for the current academic year — this means 50% work for 50% pay (so my workload is now a charming 30 hours per week)
  • I studied community-based conflict mediation techniques at the Tatamagouche Centre, Pendle Hill, and a few other places
  • I spent a lot of time hiking, snowshoeing, skiing, kayaking, and snorkelling
  • I joined a band that plays Turkish and Balkan music for folk dance parties.  No really…

Topics I May Write About Soon

  • If you can’t get disagreement, does that mean it’s the wrong question?
  • How can we “spread the no” — so one person isn’t left alone raising a point?
  • Single-system thinking vs. multi-system thinking (and how to convert between them)
  • Making the process of abstraction visible and student-directed
  • Do students have trouble distinguishing between “there is none” (zero) and “we don’t know how much” (null)?
  • Peak-to-peak amplitude “isn’t a subtraction… it’s just a difference.”  What does this mean and where can we go with it?
  • What are all the possible things that “DC” can mean?

I did my first round of interim feedback last week.  I asked students to comment on their DC Circuits course:

  • What do you like?
  • What do you dislike?
  • How are we doing with respecting our class norms?

Summary

Overall, students seem to appreciate the critical thinking approach.  They are warming to the idea that the purpose of a question is not necessarily to catch someone out, and they are noticing the difference in how it feels to think quickly vs. slowly, even if they don’t always love it.

Here’s a sample of the responses.  I’m especially excited about the ones in bold, because they represent things I’ve struggled with in the past.

I Like

“Shop work — If I’m confused about something in class, it really helps to understand better if I do it myself.”

“Being treated as thinkers. I ask a question and we discuss it.”

“Being challenged to think instead of just repeating what is taught like a robot.”

“When you work in a factory you get in a cycle of just doing and not thinking.”

“I like how it’s about science”

“Making things work and learning how it works”

“Everyone’s theories”

"Positive learning atmosphere"

“Makes me realize how much I like electronics”

“Friday assessments are not stressful.”

“Methodical, worksheets are precise.”

“Gets more interesting every day”

“I like that I am now better at asking questions about things that I don’t understand.”

I Dislike

“Nothing”

“A lot of questions go unanswered.  I understand we will learn for ourselves a lot but others are nice to have answered when brought up.”

“Pace is a bit fast. Need more time to understand theories.”

“Pace is a bit slow.  But I do realize we all have to be on the same level and learn the basics first.”

"I dislike feedback sheets, but I really don't care how I learn."

“Methodical, work sheets can sometimes slow down what should be a simple task.  Am willing to take good with the bad in this case.”

“In the beginning I was frustrated about the research we had to do on electrons, atoms, and charge.  I understand why you had us do that though.  I just found it hard and tedious.”

What’s Going Well With our Rights and Responsibilities?

“Respectful / Positive / Relaxed / Professional / No one makes fun of anyone else”

“Following directions”

“Work ethic”

“Everyone gets along”

“Giving everyone a say in discussions”

“Helping others”

“Answering questions”

“Every one is here to learn”

“Asking questions and being open about concerns”

“You are definitely challenging us and making us think.”

“I think we’re learning to say ‘I don’t know’ and allow for knowledge gaps.”

What Could We Improve About our Rights and Responsibilities?

“Nothing”

"Talking while others are talking."

“Give more help time for those who are a little slower”

“More deeper explanation”


I’ve written before about using Diana Hestwood’s slide deck on growth mindset.  It’s called “How Your Brain Learns and Remembers,” and it uses an explanation of neuron biology to promote a growth mindset.  I found the slide deck pretty self-sufficient — it was complete enough not to require a presenter.  In the spirit of “Presentation Zen,” I converted it into a handout and asked students to complete the questions embedded in it.

Note: my students needed a full 20 minutes to complete this thoughtfully without feeling rushed.  This year I didn’t give them quite enough time and their responses are less personal than they have been in the past.


Comments From Students

“It takes more than insight of studying for dendrites to grow, it will take practice.”

“Good exercise, I recommend it for future students.”

“Neurons are amazing!”


I’ve done a better job of launching our inquiry into electricity than I did last year.  The key was talking about atoms (which leads to thoughts of electrons), not electricity (which leads to thoughts of how to give someone else an electric shock from an electric fence, lightning, and stories students have heard about death by electrocution).

The task was simple: “Go learn something about electrons, about atoms, and about electrical charge.  For each topic, use at least one quote from the textbook, one online source, and one of your choice.  Record them on our standard evidence sheets — you’ll need 9 in total.  You have two hours.  Go.”

I’ve used the results of that 2-hour period to generate all kinds of activities, including

  • group discussions
  • whiteboarding sessions
  • skills for note-taking
  • what to do when your evidence conflicts
  • how to decide whether to accept a new idea

We practiced all the basic critical thinking skills I hope to use throughout the semester:

  • summarizing
  • asking questions about something even before you fully understand it
  • identifying cause and effect
  • getting used to saying “I don’t know”
  • connecting in-school-knowledge to outside-school experiences
  • distinguishing one’s own ideas from a teacher’s or an author’s

I’m really excited about the things the students have gotten curious about so far.

“When an electron jumps from one atom to the next, why does that cause an electric current instead of a chemical reaction?”

“When an electron becomes a free electron, where does it go?  Does it always attach to another atom?  Does it hang out in space?  Can it just stay free forever?”

“What makes electrons negative?  Could we change them to positive?”

“Are protons the same in iron as they are in oxygen?  How is it possible that protons, if they are all the same, just by having more or fewer of them, make the difference between iron and oxygen?”

“If we run out of an element, say lithium, is there a way to make more?”

“Why does the light come on right away if it takes so long for electrons to move down the wire?”

“What’s happening when you turn off the lights?  Where do the electrons go?  Why do they stop moving?”

“What’s happening when you turn on the light?  Something has to happen to push that electron.  Is there a new electron in the system?”

“With protons repelling each other and being attracted to electrons, what keeps the nucleus from falling apart?”

“What happens if you somehow hold protons and electrons apart?”

“Would there be no gravity in that empty space in the atom?  I like how physics are the same when comparing a tiny atom and a giant universe.”

I'm experimenting with ideas from Nancy Kline's Time To Think.  She discusses the importance of listening with undivided attention and respect as a condition for helping people think well.  She asks listeners to keep their eyes on the speaker, using face and body to show respect for the speaker's thinking.

In class today, I discussed the difference between critiquing the ideas and critiquing the person — that we aren’t here to agree thoughtlessly with everything anyone says, but to discuss (and possibly disagree with) ideas while respecting people as thinkers.

I asked students to show me, with their body and face, what it looks like if you do and do not respect someone.  Here’s what they did.

How to Show Disrespect and Inattention

  • Chat to each other
  • Take out your phone
  • Put your head down on desk
  • Face palm (or worse… DOUBLE face palm!)
  • Hide your eyes or look away

How to Show Respect and Full Attention

  • Eyes on speaker
  • Take notes
  • Smile
  • Ask questions
  • Add comments
  • Back and forth conversation, and (perhaps surprisingly)
  • Use friendly humour

I challenged us to use these techniques to convey our attention and respect as students presented their research.  So far the conversations are lively: lots of questions, people chiming in with supporting evidence, and plenty of wondering aloud.  They also joked and let their imaginations run a bit with metaphors and analogies.  Sometimes the students asked me to summarize or synthesize when their lines of thought appeared to conflict, but mostly my role was to draw attention to positive moves, like using diagrams or physically acting out electrical phenomena with their bodies, and to close the questioning so that all groups would have time to present.

Improve Next Time

When someone asks a question that goes beyond the source, presenters often respond with a new idea that seems plausible, presenting it as if it were supported by their research.  How do I help the presenter and the listeners distinguish between their wondering/remembering and the source's information?

How are these students thinking about causality?

What should I ask next?

“Electrical charge is caused due to the movement of electrons from atom to atom.”

“The appearance and properties of atoms are changed cause protons are added or removed from it.”

“Atoms are the basic building block of matter because all matter contains atoms.”

“Atoms are electrons, protons, and neutrons and are bound together by magnetic forces.”

“Electrons excess makes charge negative, while protons excess makes charge positive.  Why are these the charges?”

“Electrons cancel out protons because of the protons’ positive charge.”

“Electrons likely move so slow due to the difficulty of exerting force on them.”

“Electrons in motion cause excess energy called tails.”

“When electrons are further away it causes them to have higher energy levels.”

“The positive parts ‘want’ electrons because they are oppositely charged and so they are attracted to each other.”

“A photon absorbed by an electron causes it to escape from the atom.”

“What causes charge to never be created or destroyed?”


Here are the resources I’ll be using for the Peer Assessment Workshop.

Participant Handout

Participants will work through this handout during the workshop.  Includes two practice exercises: one for peer assessment of a hands-on task, and one for peer assessment of something students have written.  Click through to see the buttons to download or zoom.


Feel free to download the Word version if you like.

Workshop Evaluation

This is the evaluation form participants will complete at the end of the workshop.   I really like this style of evaluation; instead of asking participants to rank on a scale of 1-5 how much they “liked” something, it asks whether it’s useful in their work, and whether they knew it already.   This gives me a lot more data about what to include/exclude next time.  The whole layout is cribbed wholesale, with permission, from Will At Work Learning.  He gives a thorough explanation of the decisions behind the design; he calls it a “smile sheet”, because it’s an assessment that “shows its teeth.”

Click through to see the buttons to download or zoom.


Feel free to download the Word version if you like.

Other Stuff

In case they might be useful, here are my detailed presentation notes.

This week, I've been working on Jo Boaler's MOOC "How To Learn Math."  It's presented via videos, forum discussions, and peer assessment; registration is still open, for those who might be interested.

They’re having some technical difficulties with the discussion forum, so I thought I would use this space to open up the questions I’m wondering about.  You don’t need to be taking the course to contribute; all ideas welcome.

Student Readiness for College Math

According to Session 1, math is a major stumbling block in pursuing post-secondary education.  I’m assuming the stats are American; if you have more details about the research that generated them, please let me know!

  • Percentage of post-secondary students who go to 2-year colleges: 50%
  • Percentage of 2-year college students who take at least one remedial math course: 70%
  • Percentage of college remedial math students who pass the course: 10%
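Chaining those percentages (my arithmetic, and it assumes the populations nest the way the session implies):

    0.50 × 0.70 × (1 − 0.10) = 0.315

so roughly a third of all post-secondary students end up taking, and failing, a remedial math course at a 2-year college.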

My Questions

The rest, apparently, leave college.  The first question we were asked was: what might be causing this?  People hazarded a wide variety of guesses.  I wonder who collected these stats, and what conclusions, if any, they drew.

Math Trauma

The next topic we discussed was the unusual degree of math trauma.  Boaler says this:

“When [What’s Math Got To Do With It] came out,  I was [interviewed] on about 40 different radio stations across the US and BBC stations across the UK.  And the presenters, almost all of them, shared with me their own stories of math trauma.”

Boaler goes on to quote Kitty Dunne, reporting on Wisconsin Radio: "Why is math such a scarring experience for so many people? … You don't hear of… too many kids with scarring English class experience."  She also describes applications she received for a similar course she taught at Stanford, for which the 70 applicants "all wrote pretty much the same thing: that I used to be great at maths, I used to love maths, until…".

My Questions

The video describes the connection that is often assumed between math and "smartness": as though being good at English just means you're good at English, but being good at math means you're "smart."  That raises the question, though: where does that assumption come from? Is it connected to ideas from the Renaissance about science, intellectualism, or abstraction?

Stereotype Threat

There was a brief discussion of stereotype threat: the idea that students' performance declines when they are reminded that they belong to a group stereotyped as being poor at the task at hand.  For example, when demographic questions appear at the top of a standardized math test, the gender gap in scores is much wider than when those questions aren't asked. It can also happen just through the framing of the task.  In one interesting example, two groups of white students were given a sports-related task; the group that was told it measured "natural athletic ability" performed less well than the group that was not told anything about what it measured.

Boaler mentions, “researchers have found the gender and math stereotype to be established in girls as young as five years old.  So they talk about the fact that young girls are put off from engaging in math before they have even had a chance to engage in maths.”

My Questions:

How are pre-school girls picking this stuff up?  It can’t be the school system. And no, it’s not the math-hating Barbie doll (which was discontinued over 20 years ago).  I’m sure there’s the odd parent out there telling their toddlers that girls can’t do math, but I doubt that those kinds of obvious bloopers can account for the ubiquity of the phenomenon.  There are a lot of us actually trying to prevent these ideas from taking hold in our children (sisters/nieces/etc.) and we’re failing.  What are we missing?

July 22 Update: Part of what’s interesting to me about this conversation is that all the comments I’ve heard so far have been in the third person.  No one has yet identified something that they themselves did, accidentally or unknowingly, that discouraged young women from identifying with math.  I’m doing some soul-searching to try to figure out my own contributions.  I haven’t found them, but it seems like this is the kind of thing that we tend to assume is done by other people.  Help and suggestions appreciated — especially in the first person.

Interventions That Worked

Boaler describes two interventions that had a statistically significant effect.  One was in the context of a first-draft essay for which students got specific, critical feedback on how to improve.  Some students also randomly received this line at the end of the feedback: “I am giving you this feedback because I believe in you.”  Teachers did not know which students got the extra sentence.

The students who found the extra sentence in their feedback made more improvements and performed better in that essay.  They also, check this out, “achieved significantly better a year later.”  And to top it all off, “white students improved, but African-American students, they made significant improvements…”  It’s not completely clear, but she seems to be suggesting that the gap narrowed between the average scores of the two groups.

The other intervention was to ask seventh grade students at the beginning of the year to write down their values, including what they mean to that student and why they’re important.  A control group was asked to write about values that other people had and why they thought others might have those values.

Apparently, the students who wrote about their own values had, by the end of the year, a 40% smaller racial achievement gap than the control group.

My Questions:

Holy smoke.  This just strikes me as implausible.  A single intervention at the beginning of the year having that kind of effect months later?  I’m not doubting the researchers (nor am I vouching for them; I haven’t read the studies).  But assuming it’s true, what exactly is happening here?
