You are currently browsing the category archive for the ‘Professional Development’ category.

I’ve had two workshop proposals accepted at the second Pan-Canadian Conference on Universal Design for Learning (UDL). I’m excited to be co-designing and co-presenting with two students who are extremely knowledgeable about disability rights and disability-based accommodations in educational institutions.  I’m going to use this post to brainstorm two workshops:

  • Student-Designed Curriculum
  • Student-Designed Assessment

Comic showing a worker shovelling stairs. He says: "All these other kids are waiting to use the stairs. When I get through shoveling them off, then I will clear the ramp for you." A wheelchair user replies, "But if you shovel the ramp, we can all get in!"

If you’re not familiar with UDL, it emerged in the 90s, inspired by universal design in architecture.  In the architectural world, the goal is to design systems that are as broadly usable as possible, rather than creating “alternatives” to systems that create barriers for some users.  The classic example is about a ramp vs. stairs.  If you build stairs, you’ll need a ramp or an elevator or some other method for wheelchair users to bypass the stairs; but if you build a ramp, both walkers and wheelers can use it (along with those pushing strollers, hauling carts, using crutches, etc.).

Like any educational philosophy, people use it in lots of ways to mean lots of things, some of which contradict each other. One of the main proponents is the National Center on Universal Design for Learning, which publishes guidelines recommending:

  • Multiple means of engagement (i.e. many possible answers to “why am I learning this”)
  • Multiple means of representation (i.e. many possible ways I can access information)
  • Multiple means of action and expression (i.e. many possible ways to show what I know and can do)

I’m simplifying here, and the guidelines have evolved to include more complex ideas about executive function, self-regulation, etc.  You can see a comparison of the three versions of the guidelines as they have evolved. It looks to me like the vocabulary has gotten more complex and the order has changed, but the ideas mostly have not.  I think they intend to shift away from assuming that learners need changing, to assuming that curriculum needs to be more accessible… I’m not totally sold that this infographic does justice to that idea.  But that’s ok.  A lot of things have gotten lumped under this umbrella, and I’m interested in a specific subset: creating learning environments where students have as much control as possible.

UDL vs. Differentiation

Differentiated Instruction vs. Universal Design for Learning

This, for me, is what distinguishes UDL from a “differentiation” approach. Differentiation often focuses on being responsive to student difficulties caused by inaccessible materials, like a print handout; teachers have to create alternate materials (maybe providing an electronic copy) tailored to that student, and someone (probably the student) has to justify the increased work by submitting a documented diagnosis, or everyone would want it…

UDL focuses on removing barriers in the first place.  The vision that excites me is of choosing the most flexible options, ones that inherently allow students to differentiate for themselves.  To give a slightly trivial example, if I provide electronic copies of everything to everyone, those who want print copies can have them; those who use screen readers can use them; those who use a tablet to magnify the document can do that.   That doesn’t mean the instructor no longer needs to pay attention; students will still experience barriers that we haven’t anticipated or that aren’t in our power to change (in this example, I’ll have to notice who has high-speed internet access at home and who doesn’t; who has a tablet; etc).  But there should be fewer of them. And it should no longer depend on “proving” that you’re “needy” enough to “deserve” accommodations.  Unlike differentiation, UDL is already there before you show up, before you ask for it.  It becomes a flexibility that benefits everyone.

Standards-based grading is of course a part of that.  I  allow students to decide when they will reassess (within a certain window), in what format, and with what specific example.  They are welcome to write a quiz, but just as welcome to submit a paper, a screencast, a blog post, or a video; the subject can be an experiment, some research, an interview, etc., as long as it demonstrates their mastery of the skill in question.

Loose Means, Tight Goals… But How to Choose the Goals?

To realize the promise of UDL, I have to choose the skills with extreme care, and disaggregate them as much as possible.  Because I teach electronics circuits courses, the skills I have chosen are things like “interpret voltmeter measurements”, and “analyze a capacitor circuit”.  I have carefully removed the format from the skill; it’s not “write a lab report about a capacitor circuit.”  Should my students be required to know how to write a lab report?  Maybe.  But it’s not an outcome in my course, so it’s not a requirement in my assessment.

Can Students Decide What We Study?

I have also experimented with ways to allow students to design the curriculum.  I sometimes call this “emergent curriculum”, since it emerges from interests the students identify. But instructors often use that term to mean that the instructor inquires into the students’ learning and interests, then designs the curriculum accordingly.  I’m interested in pushing the locus of control as far toward the students as makes sense.

At the beginning of the year, I run a bunch of activities to find out what students know, what they wonder, and what they want to learn.  We make play-dough circuits, while they record and pass in a log sheet of ideas they had during the activity.  Another activity is a research prompt: “learn something about atoms you didn’t know before.”  I encourage them to record their questions as well as ideas.  All the questions go into a question bank, where I tag them according to student, topic, whether they require research or experimentation, etc.  You can see a sample below; click through to make it bigger.

Spreadsheet sample showing questions students have asked
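For readers who keep their records in something more programmable than a spreadsheet, the structure of the question bank is simple enough to sketch. This is just an illustration; the field names and sample questions below are my own invention, not the actual bank:

```python
# A minimal sketch of a tagged question bank, like the spreadsheet above.
# Field names and sample questions are hypothetical.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    student: str   # who asked it
    topic: str     # instructor-assigned tag
    testable: bool # testable with class materials, vs. needs research

bank = [
    Question("Why do two bulbs in a row glow dimmer?", "Student A", "series circuits", True),
    Question("What is actually inside a battery?", "Student B", "sources", False),
    Question("Does a longer wire change the brightness?", "Student C", "resistance", True),
]

# For the lab period, filter the list down to testable questions
testable = [q for q in bank if q.testable]
for q in testable:
    print(f"{q.topic}: {q.text}")
```

The same filter idea works for any of the tags: pull out one student’s questions, one topic, or the “requires research” pile for a library day.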

In the next class, I bring the list of questions; everyone’s assignment is to pick one and research it.  When we head to the shop for our lab period, I bring the same list of questions but filtered for testable questions; everyone’s job is to test one.  Or make up a new one.  I’ll ask them to run it by me, but I’ve never vetoed one; at most, I might insist on special safety precautions, or if the question is going to take all month to test, I might ask them to tackle a small piece of it.

Since the standard they are trying to meet is “interpret voltmeter measurements”, it really doesn’t matter what they measure.  Later standards in the same course are about Ohm’s Law, the voltage divider rule, etc, and it might seem that, at some point, they’d have to stop playing around and turn to Chapter 3.  But that’s the beauty of it. It doesn’t matter what you measure.  Ohm’s law will be there.  That’s why it’s called a law.  In the humanities, maybe you would say “it doesn’t matter what you read, it will have a metaphor in it.” Or “it doesn’t matter what you listen to — it will have chord progressions and cadences.”

Obviously, I triage the questions.  That goes back to the skills I’ve chosen; some of them students must master in order to pass, and some of them are optional extras.  I split the questions up accordingly, and ask people to choose from the “required” section before they choose from the “optional” section.  You can also imagine that there’s some direct instruction going on too — specific instructions on how to use a voltmeter, and required safety precautions.  Those become our class “Best Practices.”  After that, they can measure anything they want, as long as they do it “in accordance with best practices.”

Once they’ve generated data, either research or measurement, on their chosen topics, I photocopy a class set of everyone’s results; they break into groups and see what conclusions they can draw.  That brings up more questions, and off we go again.

In this system, it doesn’t matter what experiments they run, so long as they are about circuits.  I constrain the domain by providing batteries and lightbulbs (DC sources and resistive loads), so that we don’t end up trying to deal with topics from next semester.  Although, sometimes those come up, which is great.  The students can “interpret voltmeter measurements” about that as well.  You wouldn’t believe the extreme examples of heating effects this year’s students discovered!  Never seen anything like it.  And it didn’t matter.

My job does not spiral out of control trying to prepare lessons about every wacky topic they come up with.  My job is to provide a limited set of materials; if a question isn’t testable with those materials, they’ll pick another one.  My job is to photocopy their results; doesn’t matter what the results are about.  My job is to ask questions during their “peer review” sessions, to make sure they notice contradictions.  No matter what they investigate, my job doesn’t change, and the workload doesn’t get any heavier.

Why Present This at a UDL Conference?

I approached two students who are knowledgeable about disability accommodations in educational systems, and they agreed to work with me to design the workshops.  We had a wide-ranging brainstorming session, and these are the topics they thought were most important.

UDL talks about providing multiple ways for students to get motivated.  Rather than “providing” a limited set of motivations designed by me, I’m interested in supporting the infinite range of motivations students already have, based on their pre-existing interests and experience, by letting them build on any topic that relates to the course.

UDL talks about providing multiple ways for students to express their learning.  I’m interested in the infinite variety of ways they will come up with.   Because we have classroom Best Practices about measuring well and thinking well, I don’t have to make up a separate rubric for every different format a student wants to use. The same rubric applies to everything: safety, clarity, precision, causality, coherence with the evidence.


My workshop proposals are below.

  • If you only had 50 minutes to make sense of these ideas, what would you highlight?
  • What do I need to pay attention to so this doesn’t come across as only being useful if you teach science or engineering?
  • How can the format of the workshop itself be informed by UDL?
  • What does this make you wonder about?

Student-Designed Curriculum, With Rigour and Without Instructor Burnout

Can a group of students collaboratively design their own curriculum?  We say yes. One community college instructor and two students, both registered with Disability Services, will present techniques we have worked on together for four semesters. These include activities and record-keeping systems an instructor can use to map student interests and turn significant control of course content over to students.  There will be time for participants to create their own mapping format, or adapt an activity to increase its ability to help students design their path of inquiry.

UDL often focuses on the instructor’s ability to “provide” multiple means of representation, engagement, and expression.  We propose to frame our conversation around students’ power to “determine” those means, and then choose among them.  This vision of UDL puts real control over both format and content of a course into the hands of students.  It means that both instructors and students work explicitly to discover and value students’ pre-existing knowledge and outside-of-class experience, and dovetails with practices of culturally responsive pedagogy.  We will discuss our experiences of working with class groups who are taking on this responsibility, and share techniques that increase the accessibility of this practice for both students and teachers. This includes design of classroom activities, record-keeping systems for large amounts of unstructured student data, and how to do this even in institutions with conventional expectations about course outlines, etc.  Our work is partly informed by David Hammer’s ideas of “Discovery Learning and Discovery Teaching” (Cognition and Instruction, Vol 15, No. 4, 1997).  We will invite participants to explore where and how these techniques could work in their courses.

Relevant UDL guidelines:

  • Promote expectations and beliefs that optimize motivation
  • Heighten salience of goals and objectives
  • Vary demands and resources to optimize challenge
  • Foster collaboration and community
  • Optimize individual choice and autonomy
  • Optimize relevance, value, and authenticity
  • Activate or supply background knowledge
  • Offer ways of customizing the display of information
  • Offer alternatives for auditory information
  • Offer alternatives for visual information
  • Guide appropriate goal-setting
  • Use multiple media for communication
  • Use multiple tools for construction and composition
  • Vary the methods for response and navigation

How Student-Designed Assessment Can Make UDL Easier for Students and Teachers

Standards-based grading is an assessment system focused on self-assessment and strategic improvement. It encourages everyone to make low-stakes mistakes while experimenting with many formats, and to learn from these mistakes about the content and about themselves. As a team of two community college students and one instructor who have worked together for four semesters, we will describe how this system can increase both rigour and accessibility. We will also provide participants with templates to use in experimenting with SBG in their own courses.

Two students and one instructor, all of whom have struggled with and at times left post-secondary institutions, come together to discuss assessment techniques that transform our experience of formal education.  Standards-based grading (SBG) is not one single system; it is a philosophy that can encompass a variety of strategies, including student-controlled due dates, recognition of prior learning, and most importantly student-designed assessments. We describe how we use SBG to create the greatest possible freedom for ourselves and others in our classroom community.  We also discuss when self-advocacy can decrease accessibility, and what to do instead.  We will invite participants to evaluate their course outcomes, experiment with writing new ones using a rubric for SBG and UDL, and test their choices against imagined alternative assessments.

Relevant UDL Guidelines:

  • Facilitate personal coping skills and strategies
  • Develop self-assessment and reflection
  • Increase mastery-oriented feedback
  • Optimize individual choice and autonomy
  • Optimize relevance, value, and authenticity
  • Minimize threats and distractions
  • Offer ways of customizing the display of information
  • Guide appropriate goal-setting
  • Support planning and strategy development
  • Enhance capacity for monitoring progress
  • Build fluencies with graduated levels of support for practice and performance
  • Vary the methods for response and navigation

Last week, I presented a 90-minute workshop on Assessment Survival Skills during a week-long course on Assessing and Evaluating.  Nineteen people attended the workshop.  Sixteen were from the School of Trades and Technology (or related fields in other institutions).  There were lively small-group discussions about applying the techniques we discussed.

Main Ideas

  1. Awesome lesson plans can help students learn, but so can merely decent lesson plans given by well-rested, patient teachers
  2. If grading takes too long, try techniques where students correct the mistakes or write feedback to themselves
  3. If they don’t use feedback that you provide, teach students to write feedback, for themselves or each other
  4. If students have trouble working independently in shops/labs, try demonstrating the skill live, creating partially filled note-taking sheets, or using an inspection rubric
  5. If you need more or better activities and assignments quickly, try techniques where students choose, modify, or create questions based on a reference book, test bank, etc.
  6. If students are not fully following instructions, try handing out a completed sample assignment, demonstrating the skill in person, inspection reports, or correction assignments

When I asked for more techniques, the idea of challenging students to create questions that “stump the teacher” or “stump your classmates” came up twice.  Another suggestion was having students get feedback from employers and industry representatives.

Participants’ Questions

At the beginning of the workshop, participants identified these issues as most pressing.

[Image: scanned list of participants’ questions]

Based on that, I focused mostly on helping students do their own corrections/feedback (#3), and how to generate practice problems quickly (#5).  Interestingly, those were the two ideas least likely to receive a value rating of 5/5 on the feedback sheets — but the most often reported as “new ideas”.  I think I did the right thing by skipping the techniques for helping students follow instructions (#6), since that was the idea people were most likely to describe as one they “already use regularly.” Luckily, the techniques I focused on are very similar to those for addressing all the other concerns, except for a few very particular techniques about reducing student dependence on the instructor in the shop/lab (#4), which I discussed separately.  I received complete feedback sheets from 18 participants, and 16 of them identified at least one idea as both new and useful, so I’ll take that as a win.  Also, I got invited to Tanzania!

Participants talked a lot about what it’s like to have students who all have different skills, abilities, and levels of experience.  Another hot topic was how to deal with large amounts of fairly dry theory.  We talked a lot about techniques that help students assess their skills and choose what content they need to work on, so that students at all levels can challenge and scaffold themselves.  We also talked about helping students explore and choose what format they want to use to do that, as a way of increasing engagement with otherwise dry material.  I didn’t use the term, but I was curious to find out in what ways Universal Design for Learning might be the answer to questions and frustrations that instructors already have.  If I ever get the chance, as many participants requested, to expand the workshop, I think that’s the natural next step.

Feedback About the Workshop

Overall feedback was mostly positive. Examples (and numbers of respondents reporting something similar):

“Should be a required course”

“I liked the way you polled the class to find out what points to focus on,” “tailored,” “customized” (4)

“Well structured,” “Interactive” (7)

“Should be longer” (11)

“Most useful hour and a half so far” (4)

Feedback About Handout

“If someone tries to take this from me, there’s gonna be a fight!”

Feedback About Me

“Trade related information I can relate to” (4)

“High energy,” “fun,” “engaging,” “interesting” (5)

“You were yourself, didn’t feel scripted,” “Loved your style,” “Passionate” (3)

“That’s the tradesperson coming out!”

Here are the resources I’ll be using for the Peer Assessment Workshop.

Participant Handout

Participants will work through this handout during the workshop.  Includes two practice exercises: one for peer assessment of a hands-on task, and one for peer assessment of something students have written.  Click through to see the buttons to download or zoom.


Feel free to download the Word version if you like.

Workshop Evaluation

This is the evaluation form participants will complete at the end of the workshop.   I really like this style of evaluation; instead of asking participants to rank on a scale of 1-5 how much they “liked” something, it asks whether it’s useful in their work, and whether they knew it already.   This gives me a lot more data about what to include/exclude next time.  The whole layout is cribbed wholesale, with permission, from Will At Work Learning.  He gives a thorough explanation of the decisions behind the design; he calls it a “smile sheet”, because it’s an assessment that “shows its teeth.”

Click through to see the buttons to download or zoom.


Feel free to download the Word version if you like.

Other Stuff

In case they might be useful, here are my detailed presentation notes.

This week, I’ve been working on  Jo Boaler’s MOOC “How To Learn Math.”  It’s presented via videos, forum discussions, and peer assessment; registration is still open, for those who might be interested.

They’re having some technical difficulties with the discussion forum, so I thought I would use this space to open up the questions I’m wondering about.  You don’t need to be taking the course to contribute; all ideas welcome.

Student Readiness for College Math

According to Session 1, math is a major stumbling block in pursuing post-secondary education.  I’m assuming the stats are American; if you have more details about the research that generated them, please let me know!

Percentage of post-secondary students who go to 2-year colleges: 50%

Percentage of 2-year college students who take at least one remedial math course: 70%

Percentage of college remedial math students who pass the course: 10%

My Questions

The rest, apparently, leave college.  The first question we were asked was, what might be causing this?  People hazarded a wide variety of guesses.  I wonder who collected these stats, and what conclusions they drew, if any?

Math Trauma

The next topic we discussed was the unusual degree of math trauma.  Boaler says this:

“When [What’s Math Got To Do With It] came out,  I was [interviewed] on about 40 different radio stations across the US and BBC stations across the UK.  And the presenters, almost all of them, shared with me their own stories of math trauma.”

Boaler goes on to quote Kitty Dunne, reporting on Wisconsin Radio: “Why is math such a scarring experience for so many people? … You don’t hear of… too many kids with scarring English class experience.”  She also describes applications she received for a similar course she taught at Stanford, for which the 70 applicants “all wrote pretty much the same thing: that I used to be great at maths, I used to love maths, until …”.

My Questions

The video describes the connection that is often assumed between math and “smartness,” as though being good at English just means you’re good at English, but being good at Math means you’re “smart.”  That just raises the question: where does that assumption come from? Is it connected to ideas from the Renaissance about science, intellectualism, or abstraction?

Stereotype Threat

There was a brief discussion of stereotype threat: the idea that students’ performance declines when they are reminded that they belong to a group that is stereotyped as being poor at that task.  For example, when demographic questions appear at the top of a standardized math test, there is a much wider gender gap in scores than when those questions aren’t asked. It can also happen just through the framing of the task.  An interesting example involved two groups of white students given a sports-related task.  The group that was told it measured “natural athletic ability” performed less well than the group that was not told anything about what it measured.

Boaler mentions, “researchers have found the gender and math stereotype to be established in girls as young as five years old.  So they talk about the fact that young girls are put off from engaging in math before they have even had a chance to engage in maths.”

My Questions:

How are pre-school girls picking this stuff up?  It can’t be the school system. And no, it’s not the math-hating Barbie doll (which was discontinued over 20 years ago).  I’m sure there’s the odd parent out there telling their toddlers that girls can’t do math, but I doubt that those kinds of obvious bloopers can account for the ubiquity of the phenomenon.  There are a lot of us actually trying to prevent these ideas from taking hold in our children (sisters/nieces/etc.) and we’re failing.  What are we missing?

July 22 Update: Part of what’s interesting to me about this conversation is that all the comments I’ve heard so far have been in the third person.  No one has yet identified something that they themselves did, accidentally or unknowingly, that discouraged young women from identifying with math.  I’m doing some soul-searching to try to figure out my own contributions.  I haven’t found them, but it seems like this is the kind of thing that we tend to assume is done by other people.  Help and suggestions appreciated — especially in the first person.

Interventions That Worked

Boaler describes two interventions that had a statistically significant effect.  One was in the context of a first-draft essay for which students got specific, critical feedback on how to improve.  Some students also randomly received this line at the end of the feedback: “I am giving you this feedback because I believe in you.”  Teachers did not know which students got the extra sentence.

The students who found the extra sentence in their feedback made more improvements and performed better in that essay.  They also, check this out, “achieved significantly better a year later.”  And to top it all off, “white students improved, but African-American students, they made significant improvements…”  It’s not completely clear, but she seems to be suggesting that the gap narrowed between the average scores of the two groups.

The other intervention was to ask seventh grade students at the beginning of the year to write down their values, including what they mean to that student and why they’re important.  A control group was asked to write about values that other people had and why they thought others might have those values.

Apparently, the students who wrote about their own values had, by the end of the year, a 40% smaller racial achievement gap than the control group.

My Questions:

Holy smoke.  This just strikes me as implausible.  A single intervention at the beginning of the year having that kind of effect months later?  I’m not doubting the researchers (nor am I vouching for them; I haven’t read the studies).  But assuming it’s true, what exactly is happening here?

Thanks to all those who participated in the Blended Learning workshop.  Below, you’ll find links to the resources we used in the workshop.  There are also resources for several topics we didn’t have time to explore.  If you have questions, comments, or suggestions, don’t hesitate to let me know, by email or by leaving a comment at the bottom of the page.


Pre-Reading Assignment: Two contrasting views of blended learning.

Cities for Educational Entrepreneurship Trust publishes this website to promote blended learning, including the Rocketship School model.  Watch the video at the top of the page.

Dan Meyer discusses the evolution of the Rocketship model.  Skip the video if you don’t have time — the article speaks for itself.

Blended Learning Basics

This article on Classifying K-12 Blended Learning, sponsored by the Innosight Institute, gives clear definitions of some of the possibilities of what blended learning could mean.

Assessing Blended Learning Techniques

If we change our teaching in the hopes of improving something, how do we check if it worked?  This video about the effectiveness of science videos proposes a few ideas.

Resources on Blogging for Teachers

See the list at left, under “I’m Reading About,” for a list of topics including educational technology, literacy, teaching science and technology, and teaching problem-solving.

Resources on Document Scanning

I’ve written a number of posts about using a phone, tablet, or camera to capture quizzes or assignments, share in-class work on the projector, etc.  See especially The Scanner In My Pocket.

Resources on Flipped Teaching

Does a flipped classroom work better with before-class videos or before-class readings?  What are the pros and cons?  Student Preparation For Class and Khan Academy Is An Indictment of Education should get you started, and lead to lots more resources.

Resources on Mind-Mapping

Maria Andersen uses Mindomo to archive links, store videos, and keep notes about games for learning in every topic from music to astronomy to economics.  I use it for annotating and archiving collections of resources that wouldn’t fit on my computer. Finally, I have an easy way to tag my bookmarks, do parameterized searches, and access them from any online device.

Resources for Reading Comprehension

Here’s the exercise I demonstrated during the workshop, showing the difference between “skimming for the main idea” and “finding the questions.”  I included a handout I use with my students, which you can download and modify.  See also: Helping students notice where they get confused.

Some ideas about using reading instead of videos in “flipped”-style teaching.  Includes examples of the kind of thinking students were doing while reading.

Examples of “reading comprehension constructors” I’ve used in class, asking students to give examples, draw diagrams, ask questions, and the ever-popular “vocabulary bingo”.

You can read about these techniques and more in Cris Tovani’s book Do I Really Have To Teach Reading Comprehension.

Resources for Screencasting

Free software for making screencasts includes Jing (download to your PC) and Screencast-o-matic (cloud-based, no download — works well in classrooms).  Here are some screencasts I created — one to introduce a new topic, one to walk through the solution to a math problem.  Neither of those approaches was very successful — students didn’t absorb or understand the information.  On the other hand, screencasts explaining procedures in software have been a big time-saver.

Resources for SmartBoards

Eric has created some how-to videos for getting the most out of your SmartBoards.  If you’re on the NSCC network, you can access them at S:\KI Staff\Sullivan, Eric.

Resources for Making Educational Videos

Dan Meyer makes beautiful videos and gives them away.  He also shares some secrets: use a tripod.  No, seriously — that’s one of the biggest differences between great and awful.  The other is this: use the video to show phenomena, not explanations.  Get the students hungry, then let them ask for the instructions and info.  Here’s an example where he takes a weak textbook problem and shows you how to make it shine.  He writes about math but I suspect this is widely applicable.

I’m presenting a workshop on using Prezi tomorrow.  The agenda includes

  • What is Prezi, and what are its pros and cons?
  • Best practices, including how and when to zoom, pan, or rotate
  • Evaluating a topic’s structure to determine whether it’s best suited to Prezi, PowerPoint, a text document, or another medium
  • Individual experimentation with Prezi
  • Tips and tricks for efficient use

Some of the resources I’ll use are linked here.  I’ll update the list after the workshop with additional resources, as determined by the conversation and interests of participants.

Workshop Examples

Prezi Tutorials

Information Design in General

  • PRISM scandal cheekily reinterpreted as a visual design problem, including before-and-after slide redesign
  • Dan Meyer explains “Kicking Out the Cliche” in classroom presentations.  “Very little that’s worth saying can be disintegrated into staccato bullet points. If I ever found myself tending towards bullet points in any presentation, I’d start massaging them into an essay-style handout.”  Wash it down with this description of how to create great handouts.
  • Presentation Zen: Simple Ideas on Presentation Design and Delivery, by Garr Reynolds, shows techniques that non-professionals can use to dramatically increase the impact of presentation visuals.  Advocates creating handouts instead of putting text on slides.
  • Garr Reynolds (of Presentation Zen fame) explains how to eliminate anything that is not essential to visually communicating your point.
  • David McCandless’s TED talk on The Beauty of Data Visualization shows dramatic examples of how the visual aspects of information design can change our relationship to information

Information Design in Prezi

Since I’m known to experiment compulsively with Web 2.0 and ed-tech tools, I’ve been asked to present a workshop for the campus PD week on blended learning.  This is an interesting tension for me for a few reasons.

Return on Investment Often Too Low

On one hand, I try to give a fair shake to any promising tool or technique.  On the other hand, most of the software, Web 2.0, or gadgets I’ve tried didn’t make it into my ongoing practice.  Reasons include

Bigger Gains from Assessment, Critical Thinking, and Quality Feedback

Although screencasting, “flipped classroom” experiments, and peer instruction have been helpful to me, they have not caused the massive gains in effectiveness that I got from skills-based grading, self and peer assessment, incorporating critical thinking throughout my curriculum, or shifting to inquiry-based modelling.  But, I wasn’t asked to present on those topics; I was asked to help people think about blended learning.  Planning for the workshop has been an interesting exercise in clarifying my thinking.

Blended Learning Is…

People seem to mean different things when they say “blended learning.” Some possible meanings:

Face-to-face meetings, in a group where everyone’s doing the same thing, during school hours, in classrooms, blended with

  • Learning at your own pace
  • Learning in another location
  • Learning at other times
  • Learning that does not have to be done in a specific order
  • Using a computer to learn (maybe online, maybe not)
  • Using an internet-based technology to learn
  • Learning that is customized for the student’s level
  • Learning whose pace, location, time, or order is controlled by the student

It’s hard to have a short conversation about this, because there are several independent variables.  Here are the ones I can name:

  • increasing the level of computerization
  • automating the process of providing students with work at their demonstrated level of achievement
  • increasing the data collected about student skills (naturally, computerized assessments offer different data than teacher observation…)
  • increasing the level of student control, but only in some areas (format and speed, not content)

Are We Doomed to Talk Past Each Other?

The thing I’m finding hardest to articulate is the need to disaggregate these variables.  Some advocates seem to assume that computers are the best (or only) way of adapting to student achievement, collecting data, or empowering students.  The conversation also runs afoul of the assumption that more computerization is good, because young people like computers.

Here’s my attempt at an outline for a conversation that can at least put these questions on the table.  I will provide a list of resources for participants to take away — so far, I’m thinking of including some resources on visual design (probably from dy/dan, as well as The Non-Designer’s Design Book and maybe Presentation Zen), as well as some of the posts linked above.  I’ll probably include at least one piece debunking the assumptions about “digital natives”.  Other suggestions?  If you were just starting to think about blended learning, what would you want to know more about?

The workshop is on Thursday — all feedback welcome.

Before the Workshop

  1. Watch this video about blended learning
  2. Read this blog post assessing the effectiveness of blended learning
  3. Use a feedback sheet to write a summary and keep track of questions that arise, and bring a copy with you to the workshop
  4. Use a GoogleDoc to vote on techniques you would like to know more about


  • Brainstorm in groups: What blended learning techniques have you used, if any?  What questions do you have so far?
  • Gather questions on front board

What is Blended Learning?

  • Explain common definitions
  • Ask group for other definitions
  • Explain common reasons for trying it
  • Ask group for other reasons why someone might try it
  • Each participant identifies advantages/goals they are most interested in working toward, and enters them into a worksheet
  • Discuss in small groups and modify/add to list if desired.

Examples of Blended Learning Techniques

Each presenter discusses the techniques they have used.

Participants take a moment at the end of each technique to evaluate whether it would contribute to their identified goals.

How Can We Assess the Effectiveness of Blended Learning?


Each presenter discusses the results they noticed.

Your Plans

  • Invite participants to think of something in their teaching that they would like to improve, and consider if any of the tools we’ve discussed can help.
  • Participants explain their plans in small groups, and keep track of questions that come up.
  • Questions added to the class list


Return to any questions that haven’t been answered.


  • Each presenter passes on any recommendations they have for teachers starting to explore blended learning.  Mine:
      • Learn about visual design
      • Practice learning new software — it's a skill and you can get better
      • Learn to program — it helps you look at computer programs with a more critical eye
      • Check out the resources included with the day's worksheet
      • Stick around and experiment with these tools if you would like

This just in from dy/dan: Jo Boaler (Stanford prof, author of What’s Math Got to Do With It and inspiration for Dan Meyer’s “pseudocontext” series) is offering a free online course for “teachers and other helpers of math learners.”  The course is called “How To Learn Math.”

“The course is a short intervention designed to change students’ relationships with math. I have taught this intervention successfully in the past (in classrooms); it caused students to re-engage successfully with math, taking a new approach to the subject and their learning. In the 2013-2014 school year the course will be offered to learners of math but in July of 2013 I will release a version of the course designed for teachers and other helpers of math learners, such as parents…” [emphasis is original]

I’ve been disheartened this year to realize how limited my toolset is for convincing students to broaden their thinking about the meaning of math.  Every year, I tangle with students’ ingrained humiliation in the face of their mistakes and sense of worthlessness with respect to mathematical reasoning. I model, give carefully crafted feedback, and try to create low-stakes ways for them to practice analyzing mistakes, understanding why math in physics gives us only “evidence in support of a model” — not “the right answer”, and noticing the necessity for switching representations.  This is not working nearly as well as it needs to for students to make the progress that they need and are capable of.

I hope this course will give me some new ideas to think about and try, so I’ve signed up.  I’m especially interested in the ways Boaler is linking these ideas to Carol Dweck’s ideas about “mindset,” and proposing concrete ideas for helping students develop a growth mindset.

Anyone else interested?

As the year winds down, I’m starting to pull out some specific ideas that I want to work on over the summer/next year.  The one that presses on me the most is “readiness.”  In other words,

  • What is absolutely non-negotiable that my students should be able to do or understand when they graduate?
  • How do I make sure they get the greatest opportunity to learn those things?
  • How do I make sure no one graduates without those things?  And most frustratingly,
  • How do I reconcile the student-directedness of inquiry learning with the requirements of my diploma?

Some people might disagree that some of these points are worth worrying about.  If you don’t teach in a trade school, these questions may be irrelevant or downright harmful.  K-12 education should not be a trade school.  Universities do not necessarily need to be trade schools (although arguably, the professional schools like medicine and law really are, and ought to be).  However, I DO teach in a trade school, so these are the questions that matter to me.

Training that intends to help you get a job is only one kind of learning, but it is a valid and important kind of learning for those who choose it.  It requires as much rigour and critical thinking as anything else, which becomes clear when we consider the faith we place in the electronics technicians who service elevators and aircraft. If my students are inadequately prepared in their basic skills, they (or someone else, or many other people) may be injured or die. Therefore, I will have no truck with the intellectual gentrification that thinks “vocational” is a dirty word. Whether students are prepared for their jobs is a question of the highest importance to me.

In that light, my questions about job-readiness have reached the point of obsession.  To be a technician is to inquire.  It is to search, to question, to notice inconsistencies, to distinguish between conditions that can and cannot possibly be the cause of particular faults.  However, teaching my students to inquire means they must inquire.  I can’t force it to happen at a particular speed (although I can cut it short, or offer fewer opportunities, etc.).  At the same time, I have given my word that if they give me two years of their time, they will have skills X, Y, and Z that are required to be ready for their jobs.  I haven’t found the balance yet.

I’ll probably write more about this as I try to figure it out.  In the meantime, Grant Wiggins is writing about a recent study that found a dramatic difference between high-school teachers’ assessment of students’ college readiness, and college profs’ assessment of the same thing.  Wiggins directs an interesting challenge to teachers: accurately assess whether students are ready for what’s next, by calibrating our judgement against the judgement of “whatever’s next.”  In other words, high school teachers should be able to predict what fraction of their students are adequately prepared for college, and that number should agree reasonably well with the number given by college profs who are asked the same question.  In my case, I should be able to predict how well prepared my students are for their jobs, and my assessment should match reasonably the judgement of their first employer.

In many ways I’m lucky: we have a Program Advisory Group made up of employer representatives who meet to let us know what they need. My colleagues and I have all worked between 15 and 25 years in our field. I send all my students on 5-week unpaid work terms.  During and after the work terms, I meet with the student and the employer, and get a chance to calibrate my judgement.  There’s no question that this is a coarse metric; the reviews are influenced by how well the student is suited to the culture of a particular employer, and their level of readiness in the telecom field might be much higher than if they worked on motor controls.  Sometimes employers’ expectations are unreasonably high (like expecting electronics techs to also be mechanics).  There are some things employers may or may not expect that I am adamant about (for example, that students have the confidence and skill to respond to sexist or racist comments).    But overall, it’s a really useful experience.

Still, I continue to wonder about the accuracy of my judgement.  I also wonder about how to open this conversation with my colleagues.  It seems like something it would be useful to work on together.  Or would it?  The comments on Wiggins’ post are almost as interesting as the post itself.

It seems relevant that most commenters are responding to the problem of students’ preparedness for college, while Wiggins is writing about a separate problem: teachers’ unfounded level of confidence about students’ preparedness for college.

The question isn’t, “why aren’t students prepared for college.”  It’s also not “are college profs’ expectations reasonable.”  It’s “why are we so mistaken about what college instructors expect?”

My students, too, often miss this kind of subtle distinction.  It seems that our students aren’t the only ones who suffer from difficulty with close reading (especially when stressed and overwhelmed).

Wiggins calls on teachers to be more accurate in our assessment, and to calibrate our assessment of college-readiness against actual college requirements. I think these are fair expectations.  Unfortunately, assessment of students’ college-readiness (or job-readiness) is at least partly an assessment of ourselves and our teaching.

A similar problem is reported about college instructors.  The study was conducted by the Foundation for Critical Thinking with both education faculty and subject-matter faculty who instruct teacher candidates. They write that many profs are certain that their students are leaving with critical thinking skills, but that most of those same profs could not clearly explain what they meant by critical thinking, or give concrete examples of how they taught it.

Self-assessment is surprisingly intractable; it can be uncomfortable and can elicit self-doubt and anxiety.  My students, when I expect them to assess their work against specific criteria, exhibit all the same anger, defensiveness, and desire to change the subject as seen in the comments.  Most of them literally can’t do it at first.  It takes several drafts and lots of trust that they will not be “punished” for admitting to imperfection.  Carol Dweck’s work on “growth mindset” comes to mind here… is our collective fear of admitting that we have room to grow a consequence of “fixed mindset”?  If so, what is contributing to it? In that light, the punitive aspects of NCLB (in the US) or similar systemic teacher blaming, isolation, and lack of integrated professional development may in fact be contributing to the mis-assessment reported in the study, simply by creating lots of fear and few “sandboxes” of opportunity for development and low-risk failure.  As for the question of whether education schools are providing enough access to those experiences, it’s worth taking a look at David Labaree’s “The Trouble with Ed School.”

One way to increase our resilience during self-assessment is to do it with the support of a trusted community — something many teachers don’t have.  For those of us who don’t, let’s brainstorm about how we can get it, or what else might help.  Inaccurate self-assessment is understandable but not something we can afford to give up trying to improve.

I’m interested in commenter I Hodge’s point about the survey questions.  The reading comprehension question allowed teachers to respond that “about half,” “more than half,” or “all, or nearly all” of their students had an adequate level of reading comprehension.  In contrast, the college-readiness question seems to have required a teacher to select whether their students were “well,” “very well,” “poorly,” or “very poorly” prepared.  This question has no reasonable answer, even if teachers are only considering the fraction of students who actually do make it to college.  I wonder why they posed those two questions so differently?

Last but not least, I was surprised that some people blamed college admissions departments for the admission of underprepared students.  Maybe it’s different in the US, but my experience here in Canada is that admission is based on having graduated high school, or having gotten a particular score in certain high school courses.  Whether under-prepared students got those scores because teachers under-estimated the level of preparation needed for college, or because of rigid standards or standardized tests or other systemic problems, I don’t see how colleges can fix that, other than by administering an entrance test.  Maybe that’s more common than I know, but neither the school at which I teach nor the well-reputed university that I (briefly) attended had one.  Maybe using a high school diploma as the entrance exam for college/university puts conflicting requirements on the K-12 system?  I really don’t know the answer to this.

Wiggins recommends regularly bringing together high-school and college faculty to discuss these issues.  I know I’d be all for it.  There is surely some skill-sharing that could go back and forth, as well as discussions of what would help students succeed in college.  Are we ready for this?

Michael Pershan kicked my butt recently with a post about why teachers tend to plateau in skill after their third year, connecting it to Cal Newport’s ideas such as “hard practice” (and, I would argue, “deep work”).

Michael distinguishes between practice and hard practice, and wonders whether blogging belongs on his priority list:

“Hard practice makes you better quickly. Practice lets you, essentially, plateau. …

Put it like this: do you feel like you’re a 1st year teacher when you blog? Does your brain hurt? Do you feel as if you’re lost, unsure how to proceed, confused?
If not, you’re not engaged in hard practice.”

Ooof.  On one hand, it made me face my desire to avoid hard practice; I feel like I’ve spent the last 8 months trying to decrease how much I feel like that.  I’ve tried to create classroom procedures that are more reusable and systematic, especially for labs, whiteboarding sessions, class discussions, and model presentations.

It’s a good idea to periodically take a hard look at that avoidance, and decide whether I’m happy with where I stand.  In this case, I am.  I don’t think the goal is to “feel like a first year teacher” 100% of the time; it’s not sustainable and not generative.  But it reminds me that I want to know which activities make me feel like that, and consciously choose some to seek out.

Michael makes this promise to himself:

It’s time to redouble my efforts. I’m half way through my third year, and this would be a great time for me to ease into a comfortable routine of expanding my repertoire without improving my skills.

I’m going to commit to finding things that are intellectually taxing that are central to my teaching.

It made me think about what my promises are to myself.

Be a Beginner

Do something every summer that I don’t know anything about and document the process.  Pay special attention to how I treat others when I am insecure, what I say to myself about my skills and abilities, and what exactly I do to fight back against the fixed-mindset that threatens to overwhelm me.  Use this to develop some insight into what exactly I am asking from my students, and to expand the techniques I can share with them for dealing with it.

Last summer I floored my downstairs.  The summer before that I learned to swim — you know, with an actual recognizable stroke.  In both cases, I am proud of what I accomplished.  In the process, I was amazed to notice how much concentration it took not to be a jerk to myself and others.

Learn More About Causal Thinking

I find myself being really sad about the ways my students think about causality.  On one hand, I think my recent dissections of the topic are a prime example of “misconceptions listening” — looking for the deficit.  I’m pretty sure my students have knowledge and intuition about cause that I can’t see, because I’m so focused on noticing what’s going wrong.  In other words, my way of noticing students’ misconceptions is itself a misconception.  I’d rather be listening to their ideas fully, doing a better job of figuring out what’s generative in their thinking.

What to do about this? If I believe that my students need to engage with their misconceptions and work through them, then that’s probably what I need too. There’s no point in my students squashing their misconceptions in favour of “right answers”; similarly, there’s no point in me squashing my sadness and replacing it with some half-hearted “correct pedagogy.”

Maybe I’m supposed to be whole-heartedly happy to “meet my students where they are,” but if I said I was, I’d be lying. (That phrase has been used so often to dismiss my anger at the educational malpractice my students have endured that I can’t even hear it without bristling).  I need to midwife myself through this narrow way of thinking by engaging with it.  Like my students, I expect to hold myself accountable to my observations, to good-quality reasoning, to the ontology of learning and thinking, and to whatever data and peer feedback I can get my hands on.

My students’ struggle with causality is the puzzle from which my desire for explanation emerged; it is the source of the perplexity that makes me unwilling to give up. I hope that pursuing it honestly will help me think better about what it’s like when I ask my students to do the same.

Interact with New Teachers

Talking with beginning teachers is better than almost anything else I’ve tried for forcing me to get honest about what I think and what I do.  There’s a new teacher in our program, and talking things through with him has been a big help in crystallizing my thoughts (mutually useful, I think).  I will continue doing this and documenting it.  I also put on a seminar on peer assessment for first-year teachers last summer; it was one of the more challenging lesson plans I’ve ever written.  If I have another chance to do this, I will.

Work for Systemic Change

I’m not interested in strictly personal solutions to systemic problems.  I won’t have fun, or meet my potential as a teacher, if I limit myself to improving me.  I want to help my institution and my community improve, and that means creating conditions and communities that foster change in collective ways.  For two years, I tried to do a bit of this via my campus PD committee; for various reasons, that avenue turned out not to lead in the directions I’m interested in going.  I’ve had more success pressing for awareness and implementation of the Workplace Violence Prevention regulations that are part of my local jurisdiction’s Occupational Health and Safety Act.

I’m not sure what the next project will be, but I attended an interesting seminar a few months ago about our organization’s plans for change.  I was intrigued by the conversations happening about improving our internal communication.  I’ve also had some interesting conversations recently with others who want to push past the “corporate diversity” model toward a less ahistorical model of social justice or cultural competence.  I’ll continue to explore those to find out which ones have some potential for constructive change.

Design for Breaks

I can’t do this all the time or I won’t stay in the classroom.  I know that now.  As of the beginning of January, I’ve reclaimed my Saturdays.  No work on Saturdays.  It makes the rest of my week slightly more stressful, but it’s worth it.  For the first few weeks, I spent the entire day alternately reading and napping.  Knowing that I have that to look forward to reminds me that the stakes aren’t as high as they sometimes seem.

I’m also planning to go on deferred leave for four months starting next January.  After that, I’ve made it a priority to find a way to work half-time.   The kind of “intellectually taxing” enrichment that I need, in order for teaching to be satisfying, takes more time than is reasonable on top of a full-time job.  I’m not willing to permanently sacrifice my ability to do community volunteer work, spend time with my loved ones, and get regular exercise. That’s more of a medium-term goal, but I’m working a few leads already.

Anyone have any suggestions about what I should do with 4 months of unscheduled time starting January 2014?