Frank Noschese just posed some questions about “just trying something” in problem-solving, and why students seem to do it intuitively with video games but experience “problem-solving paralysis” in physics. When I started writing my second long-ish comment I realized I’m preoccupied with this, and decided to post it here.
What if part of the difference is students’ reliance on brute force approaches?
In a game, which is a human-designed environment, there are a finite number of possible moves. And if you think of typical gameplay mechanics, that number is often 3-4. Run left, run right, jump. Run right, jump, shoot. Even if there are 10, they’re finite and predictable: if you run from here and jump from exactly this point, you will always end up at exactly that point. They’re also largely repetitive from game to game. No matter how weird the situation in which you find yourself, you know the solution is some permutation of run, jump, shoot. If you keep trying you will eventually exhaust all the approaches. It is possible to explore every point on the game field and try every move at every point — the brute force approach (whether this is necessary or even desirable is immaterial to my point).
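The exhaustibility of a finite move set is easy to see in code. Here's a minimal sketch, where the moves, the turn limit, and the win condition are all invented for illustration:

```python
from itertools import product

# Hypothetical toy level: a finite move set plus a bounded number of turns
# means every strategy can be enumerated -- which is what makes brute force viable.
MOVES = ["run_left", "run_right", "jump", "shoot"]

def solves_level(sequence):
    # Stand-in win condition, invented for illustration:
    # this level is beaten by run_right, jump, shoot in that order.
    return list(sequence) == ["run_right", "jump", "shoot"]

def brute_force(max_turns=3):
    """Try every possible move sequence up to max_turns moves long."""
    for length in range(1, max_turns + 1):
        for sequence in product(MOVES, repeat=length):
            if solves_level(sequence):
                return list(sequence)
    return None  # the search space is finite, so "no solution" is provable

print(brute_force())
```

With 4 moves and 3 turns there are only 4 + 16 + 64 = 84 sequences to try, so persistence is guaranteed to pay off. That guarantee is exactly what disappears in a natural environment.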
In nature, being as it is a non-human-designed environment, there is an arbitrarily large number of possible moves. If students surmise that “just trying things until something works” could take years and still might not exhaust all the approaches, well, they’re right. In fact, this is an insight into science that we probably don’t give them enough credit for.
Now, realistically, they also know that their teacher is not demanding something impossible. But being asked to choose from among infinite options, and not knowing how long you’re going to be expected to keep doing that, must make you feel pretty powerless. I suspect that some students experience a physics experiment as an infinite playing field with infinite moves, of which every point must be explored. Concluding that that’s pointless or impossible is, frankly, valid. The problem here isn’t that they’re not applying their game-playing strategies to science; the problem is that they are. Other conclusions that would follow:
- If there are infinite equally likely options, then whether you “win” depends on luck. There is no point trying to get better at this since it is uncontrollable.
- People who regularly win at an uncontrollable game must have some kind of magic power (“smartness”) that is not available to others.
And yet, those of us on the other side of the lesson plan do walk into those kinds of situations. We find them fun and challenging. When I think about why I do, it’s because I’m sure of two things:
- any failure at all will generate more information than I have
- any new information will allow me to make better quality inferences about what to do next
I don’t experience the game space as an infinite playing field of which each point must be explored. I experience it as an infinite playing field where it’s (almost) always possible to play “warmer-colder.” I mine my failures for information about whether I’m getting closer to or farther away from the solution. I’m comfortable with the idea that I will spend my time getting less wrong. Since all failures contain this information, the process of attempting an experiment generally allows me to constrain it down to a manageable level.
My willingness to engage with these types of problems depends on a skill (extracting constraint info from failures), a belief (it is almost always possible to do this), and an attitude (“less wrong” is an honourable process that is worth being proud of, not an indictment of my intelligence) that I think my students don’t have.
Richard Louv makes a related point in Last Child in the Woods: Saving Our Children From Nature-Deficit Disorder (my review and some quotes here). He suggests that there are specific advantages to unstructured outdoor play that are not available otherwise — distinct from the advantages that are available from design-y play structures or in highly-interpreted walks on groomed trails. Unstructured play brings us face to face with infinite possibility. Maybe it builds some comfort and helps us develop mental and emotional strategies for not being immobilized by it?
I’m not sure how to check, and if I could, I’m not sure I’d know what to do about it. I guess I’ll just try something, figure out a way to tell if it made things better or worse, then use that information to improve…
Good points and post. Some educators make a big deal that students must feel "in a safe environment" to risk discovering answers or risk using new learning behaviors. But there's something to be said for working in unfamiliar territory, too. There have been some studies recently about how all the new safe, risk-free playgrounds are harming kids' minds. Richard Louv is definitely worth the read.
Interesting point about the “safe environment.” I’m going to go out on a limb and say that promising to create safety can actually lower students’ trust, because it is impossible. Either they believe our “safety” promise, in which case they probably end up feeling betrayed when we can’t always deliver, or they wisely don’t believe our promise, which leads them to conclude that we say things that can’t be trusted. What we can do is create a “safer” environment. Promising this to our students, and making it clear why, is probably a conversation worth having — could tie in to other ideas, like letting them see us struggle and be imperfect.
That said, when asking people to work in an environment that they find threatening, it’s important to lower the risk level as much as possible. So far, the only thing I’ve tried that contributed to a lower risk level was letting students control their re-assessment. Goal-less problems also seem like a good idea.
I think your analysis accurately explores students’ game player approaches and how they relate to general learning situations. I especially like that you get to useful insights by respecting the students and their abilities! Well stated!
Thanks Patti. I’ve been exploring the idea of explicitly teaching how to “draw inferences,” ever since I discovered that almost none of my students knew what an inference was (they seemed to lack the concept, not just the word). So I’ve been trying to study the way I draw inferences about my students’ knowledge. I am indebted to Brian Frank here, who uses the question “why might someone think this?”. It’s a useful practice for me to become aware of my assumptions, and I hope it will be a useful exercise for my students to solidify their ideas about entailment.
Mylene, it is interesting that students try to use a plug-and-chug strategy even when the problem at hand won’t support it. That is exactly the type of behavior we can think of as being reinforced by video games, which have a limited set of moves and permutations.
I wonder if some small part of the success of the modeling curricula is that they take those infinite possibilities and reduce them down to some finite number of models which have been shown to work in various situations throughout the course. Modeling is in some way changing the course so that there is a limited move set to explore. Interesting.
Re: plug and chug… agreed. I got to thinking more about this, and about my own gaming experience. There are a lot of games in which only brute-force will allow you to get a “perfect” score (adventure games where points come from collecting items you don’t really need; platform games where hidden levels are concealed in places that it doesn’t make sense to go; puzzle games where randomly-generated elements sometimes make a high-score impossible, so that a player with their eyes on the leaderboard will be best served by restarting the game repeatedly until the random elements align favourably). These games tell us that strategy can get you to your goal (what an educator might call proficiency), but only brute-force can get you above and beyond the goal (mastery). That’s exactly backward from how I want my students to think about it.
Re: Modelling — makes sense. I’m keeping an eye open to how I might incorporate it in the future. I would want my students to go beyond using the model I tell them to use, into understanding why we have models at all (to constrain problems, partly). I also imagine that even a modelling curriculum could reinforce the brute-force assumption, if students don’t know how I chose the model. From their point of view, it might look as though teachers have spent years exploring the entire playing field to find the model that works. In fact, I choose a model by making inferences about its likelihood of working. I don’t need to try every single model in order to figure out which one is best, and I wish my students didn’t need to either.
I keep coming back to inferences, and the vocabulary of formal logic (including the concept of “elegance.”) More on this soon.
I am in the process of getting to know the modelling curriculum a bit better, and most of the real insight has come from bloggers. In modelling, the students collectively develop the model by performing an initial experiment that is meant to help them discover the model’s key features, and often to see the limitations of a previous one. But in practice I have no idea how much brute-force strategies would help those students.
I think the inferences you are able to make are due to your expert-level knowledge organization, allowing you to focus on the key features. And I completely agree that helping students get to a point where they can make those inferences themselves is ultimately what we want to do.
Interesting point about the importance of organizing knowledge. I guess where I want to start is helping students realize that there is such a thing as choosing a model by strategy — even if they’re not ready to do that yet.
There’s a ton of literature on teaching expert thinking. A good spot to start is a CWSEI document you will find on their instructor resource page: “teaching expert thinking” http://cwsei.ubc.ca/resources/instructor_guidance.htm
Thanks Joss — the whole collection looks interesting. Took a quick look at the “Expert Thinking” document and found it concise and highly usable.