There is nothing like a new set of data, just sitting there in the computer, all ready for me to clean and graph and analyse and extract its secrets. I know I should be methodical in my approach, but sometimes I feel like a kid at Christmas, metaphorically ripping open the presents as I jump from graph to procedure, and back to graph again. I then have to go back and do it properly, documenting my approach and recording results, but that’s okay too. That can reveal a second lot of wonders as I sift and ponder.
This is what we should be enabling our students to do. Students need to catch the excitement of making a REAL graph of REAL data and finding out what it REALLY tells them. I have already blogged about the importance of real data in teaching, so those of you who have recently started following might like to take a look. I also gave some suggestions on how to get real data.
I once dabbled in qualitative research. My PhD thesis used mixed methodology, which entailed recording interviews, transcribing and coding. It seemed like a fun idea at the time of my research proposal. Sifting through the interviews for gems of insight, figuring out common themes and finding linkages and generalities, seemed appealing. And it was effective – I came up with a new idea for measuring educational effectiveness through opportunity to learn. But given the choice I won’t be doing it again. I truly admire qualitative researchers, as it takes so much more work than good old quantitative research. Much of it is just slog, reading and coding the interviews. It is really important and totally valid as far as I can see. It’s just that it’s a little – dare I say it – boring.
But I digress. This post is meant to be about questions. The questions you ask in class, the questions in the textbooks, the questions in on-line exercises and the questions in the tests and exams at the end of the unit of work.
In another previous post I lamented how “Statistics Textbooks suck out all the fun.” I cited the work by George Cobb, reviewing textbooks in 1987.
“Judge a book by its exercises and you cannot go far wrong,” said George Cobb.
It’s still true. The questions are what matter.
I have developed a course for learners who lack confidence in mathematics. There are on-line lecture videos and notes with audio, and there are links to other materials, but the real learning takes place in the questions. Statistics is not a spectator sport – you have to get in and do it. Things can look easy when you see someone else do them: Olympic diving, producing PivotCharts in Excel, playing the piano, developing a linear programming model. But these skills require practice to become proficient. However, there is no point in practising the wrong thing, or practising doing the right thing wrongly. Both can happen when questions and feedback are not well designed.
Recently I have been immersed in questions. I am developing on-line materials for a textbook, and my own on-line materials to support high school students and teachers who are struggling with New Zealand’s innovative and world-leading statistics curriculum. From the textbook I have had to select problems to work through in demonstrations. For my own course, I am devising my own questions. As I do this I have become intimately involved with the NCEA (National Certificate of Educational Achievement) questions, as this is how the students will ultimately be tested. This combination has caused me to think a lot about how questions can help or hinder learning.
Good questions help learning; poor questions hinder it. A good question has context, real data and meaning. Statisticians don’t care about x. Mathematicians do. Asking students to interpret the Excel output of a regression of Y on X is a mathematical question and has no place in a statistics textbook or course. Asking how sales are affected by temperature, or how grades are affected by time spent doing homework – these are meaningful examples.
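To make the contrast concrete, here is a minimal sketch of what sits behind a context-rich regression question. It is written in Python using numpy and statsmodels (my choice of tools, not ones mentioned above), and the temperature and sales figures are invented purely for illustration.

```python
# A question with context: how do ice-cream sales respond to temperature?
# (Illustrative figures only; assumes Python with numpy and statsmodels installed.)
import numpy as np
import statsmodels.api as sm

temperature = np.array([18, 21, 23, 25, 28, 30, 32])   # degrees Celsius
sales = np.array([120, 135, 150, 160, 185, 200, 215])  # ice creams sold

X = sm.add_constant(temperature)   # add an intercept term
model = sm.OLS(sales, X).fit()     # ordinary least squares fit

print(model.params)    # intercept and slope: extra sales per extra degree
print(model.rsquared)  # proportion of variation in sales explained by temperature
```

The point is not the software but the framing: the slope now answers a question someone might actually care about – extra sales per extra degree – rather than describing an anonymous x and y.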
A good question needs to test the thing you are trying to test. If you want to know if the student understands the implications of variability, getting them to calculate the standard deviation by hand is not going to do it. If you want a student to know how to use their calculator to find binomial probabilities, then that is what you should ask. But if you want them to be able to identify times when the binomial distribution is a good model of reality, then the question needs to be relevant to that.
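As a small illustration of that difference, here is a sketch in Python (assuming scipy is available; the faulty-components context and numbers are invented). Computing the probability is the easy, mechanical part; the real test of understanding is whether the binomial model fits at all – a fixed number of trials, a constant probability of “success”, and independence between trials.

```python
# If 10% of components are faulty and we inspect 20 at random,
# what is the chance of finding at most 2 faulty ones?
# (Invented context; assumes Python with scipy installed.)
from scipy.stats import binom

n, p = 20, 0.10
prob_at_most_2 = binom.cdf(2, n, p)   # P(X <= 2)
prob_exactly_0 = binom.pmf(0, n, p)   # P(X = 0)

print(round(prob_at_most_2, 3))
print(round(prob_exactly_0, 3))
```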
There need to be enough questions. By working through multiple examples students come to understand what is specific to each context, and what is general to all examples. This may sound like “drill”, and perhaps it is, but I am a firm believer in consistent effort on worthwhile questions.
There has to be good feedback. Students need to be able to find out if they are correct, or “on the right track”, as so many of my students ask me. The problem is, if you give them the answers, sometimes they just read them and we are back to the “statistics or operations research as a spectator sport” effect. And sometimes students don’t realise the nuances in what they have written, thinking it looks like the model answer, when really they have missed something vital. Often the teacher has to look after this, which requires a lot of time, though we are exploring ways of using on-line quizzes and exercises to enable more targeted feedback, more promptly.
Whatever the approach, we need to make sure that the questions we ask students to work on lead them to discover the joy of statistics and operations research, as well as to pass the course.
4 Comments
I’m coming from a math standpoint, but I agree completely. I think there’s a beauty in understanding a concept that’s part of the reason why the proof exists in the first place. I mean, Einstein once wondered what light would look like if he was travelling at the speed of light and that question/curiosity led to a revolution in physics. When I’m reading papers or texts, I’m often asking myself things like what question(s) is this theorem answering and it helps a lot with understanding things. I mean, I have a story on my page that I wrote yesterday about how I grew up hating math and my story with irrational numbers because basically I wasn’t allowed to ask questions.
Math vs stats gets into different type of questions and stuff, but I think there’s a fundamental part of teaching that’s about encouraging students to ask questions.
[…] learning, transforming the discipline from mathematics to statistics. We can help students embrace the excitement of a true statistical investigation. But in this time of transition, the report-writing aspects are a problem. […]
[…] But there is that frisson of excitement as you finally finish cleaning your database and a freshly minted set of variables and observations beckons to you, with SPSS, SAS or even Excel at your fingertips. A new set of data is a new journey of discovery. Of course a serious researcher has already worked out a methodical route through her hypotheses… maybe. Or do we mostly all fossick about looking for patterns and insights, growing more and more familiar with the feel of the data, as if we were squeezing it through our fingers? So yes – my experience of data exploration is playful. It is an adventure, with wrong turns, forgetting the path, starting again, finding something only to lose it again and finally saying “enough” and taking a break, not because the data has been exhausted, but because I am. […]
[…] need to experience statistical analysis, to understand the process. They may also discover the excitement of a new set of data to explore, and the anticipation of an interesting result. These students may decide to study more statistics, […]