In teaching it can be difficult to know whether to start with a problem or a solution method. It seems more obvious to start with the problem, but sometimes it is better to introduce the possibility of the solution before posing the problem.
A common teaching method in mathematics is to teach the theory, followed by applications. Or not followed by applications. I seem to remember learning a lot of mathematics with absolutely no application – which was fine by me, because it was fun. My husband once came home from survey school, and excitedly told me that he was using complex numbers for some sort of transformation between two irregular surfaces. Who’d have thought? I had never dreamed there could be a real-life use for the square root of -1. I just thought it was a cool idea someone thought up for the heck of it.
But yet again we come to the point that statistics and operations research are not mathematics. Without context and real-life application they cease to exist and turn into … mathematics!
My colleague wrote a guest post about “applicable mathematics”, which he separates from “applied mathematics”. Applicable maths appears when teachers make up applications to try to make mathematics seem useful. There is little to recommend it. A form of “applicable maths” occurs in probability assessment questions where the examiner decides not to tell the examinee all the information, and the examinee has to draw Venn diagrams and use logical thinking to find out something that anyone in the real world would clearly be able to read directly from the data! I actually enjoy answering questions like that, and they have a point in helping students understand the underlying structure of the data. But I do not fool myself into thinking that they are anywhere near real life. Nor are they statistics.
So the question is – when teaching statistics and operations research, should you start with an application or a problem or a case, and work from there to the theory? Or do students need some theory, or at least an understanding of basic principles before a case or problem can have any meaning? Or in a sequence of learning do we move back and forward between theory and application?
My first response is that of course we should start with the data, as many books on the teaching of statistics tell us. Well, actually we should start with the problem, as that really precedes the collection of the data. But then, how can we know what sorts of problems to frame if we don’t have some idea of what is possible through modelling and statistics? So should we first begin with some theory? The New Zealand Curriculum emphasises the PPDAC cycle: Problem, Plan, Data, Analysis, Conclusion. However, in order to pose the problem in the first place, we need the theory of the PPDAC cycle itself. The answer is not simple and depends on the context.
I have recently made a set of three videos explaining confidence intervals and bootstrapping. These are two very difficult topics that become simple in an instant. What I mean is that until you understand a confidence interval, it makes no sense, and you can see no reason why it should make sense. You pass through a “liminal space” of confusion and anxiety. Then, when you emerge out the other side, confidence intervals instantly make sense, and it is equally difficult to see what made them confusing. This threshold effect makes teaching difficult, as the teacher needs to work out what made the problem confusing in the first place.
I present the idea of a confidence interval first. Then I use examples. I present the idea of bootstrapping, then give examples. I think in this instance it is helpful to delineate the theory or the idea in reasonably abstract form, interspersed with examples. I also think diagrams are immensely useful, but that’s another topic.
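To make the bootstrapping idea concrete, here is a minimal sketch in Python of a percentile bootstrap confidence interval for a mean. The data are invented for illustration (imagined weekly sales of Helen’s choconutties), and the percentile method shown is just one simple variant of the bootstrap.

```python
import random
import statistics

def bootstrap_ci_mean(sample, n_resamples=10000, level=0.95, seed=42):
    """Percentile bootstrap confidence interval for the mean.

    Resample the data with replacement many times, compute the mean of
    each resample, and take the middle `level` proportion of those means.
    """
    rng = random.Random(seed)
    n = len(sample)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=n)) for _ in range(n_resamples)
    )
    lo_idx = int((1 - level) / 2 * n_resamples)
    hi_idx = n_resamples - 1 - lo_idx
    return means[lo_idx], means[hi_idx]

# Hypothetical data: weekly choconuttie sales over ten weeks
sales = [34, 41, 29, 47, 38, 36, 44, 31, 39, 42]
low, high = bootstrap_ci_mean(sales)
print(f"95% bootstrap CI for the mean: ({low:.1f}, {high:.1f})")
```

The point of the example is that nothing beyond resampling and sorting is required: no formula for the standard error, no normality assumption, which is exactly what makes bootstrapping a friendly route into confidence intervals.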
What prompted these thoughts about “which comes first” was a comment made about our “AtMyPace: Statistics” iOS app.
The YouTube videos used in AtMyPace: Statistics were developed to answer specific needs in a course. They generally take the format of a quick summary of the theory, followed by an example, often related to Helen and her business selling choconutties.
The iOS app, AtMyPace: Statistics, was set up as a way to capitalise on the success of the YouTube videos, and we added two quizzes of ten true/false questions to complement each of the videos. We also put these same quizzes in our on-line course and found that they were surprisingly popular. In a way they are a substitute for a textbook or notes, but they require the person to commit one way or the other to an answer before reading a further explanation. We had happened on an effective way of engaging students with the material.
AtMyPace: Statistics is not designed to be a full course in statistics, but rather a tool to help students who might be struggling with concepts. We have also developed a web-based version for those who are not the happy owners of iOS devices. At present the web version is a copy of the app, but we will happily add other questions and activities as demand arises.
I received the following critique of the AtMyPace: Statistics app:
“They are nicely done but very classical in scope. The approach is tools-oriented using a few “realistic” examples to demonstrate the tool. This could work for students who need to take exams and want accessible material.”
Very true. The material in AtMyPace: Statistics is classical in scope, as we focus on the material currently being taught in most business schools and first-year statistics service courses. We are trying to make a living, and once that is happening we will set out to change the world!
The reviewer continues,
“I think that in adult education you should reverse the order and have the training problem oriented. Take a Six Sigma DMAIC process as an example. The backbone is a problem scheduled to be solved. The path is DMAIC and the tools are supporting the journey. If you want to do it that way you need to tailor the problem to the audience.”
In tailored adult education it is likely that a problem-based approach will work. I would strongly recommend it.
I had an interesting discussion some time ago with a young lecturer working in a prestigious case-based MBA programme in North America. The entire MBA is taught using cases, and is popular and successful. My friend had some reservations about case-based teaching for a subject like Operations Research, which has a body of skills that is needed as a foundation for analysis. Statistics would be similar. The challenge is making sure students acquire the necessary skills and knowledge, along with the ability to transfer them to another setting or problem. Case-based learning is not an efficient way to accomplish this.
In another instance, David Munroe commented on our video “Choosing which statistical test to use”, which receives about 1000 views a week. In the video I suggest a three step process involving thinking about what kind of data we have, what kind of sample, and the purpose of the analysis. The comment was:
Myself I would put purpose first. 🙂 The purpose of the analysis determines what data should be collected – and more data is not necessarily more informative. In my view it is more useful to think ‘what am I trying to achieve’ with this analysis before collecting the data (so the right data have a chance to be collected). This in contrast to: collecting the data and then going ‘now what can I get from this data?’ (although this is sometimes an appropriate research technique). I think because we’ve already collected the data any time we’re illustrating particular modelling tools or statistical tests, we reinforce the ‘collect the data first then worry about analysis’ approach – at least subconsciously.
Thanks David! Good thinking, and if I ever redo the video I may well change the order. I chose the order I did, as it seemed to go from easy to difficult. (Actually I don’t remember consciously thinking about the order – it just fell out of individual help sessions with students.) And the diagram was developed in response to the rather artificial problems I was posing!
I’ll step back a bit and explain. One problem I have seen in teaching Statistics and Operations Research is that students fail to make connections. They also compartmentalise the different aspects and find it difficult to work out when certain procedures would be most useful. I wrote a post about this. In the statistics course I wrote a set of scenarios describing possible applications of statistical methods in a business context. The students were required to work out which technique to use in each scenario and found this remarkably difficult. They could perform a test on difference of two means quite well, but were hard-pressed to discern when the test should be used. So I made up even more questions to give them more practice, and designed my three step method for deciding on the test. This helped.
I had not thought of it as a way to decide which test to use in a real-life situation. Surely that would be part of a much bigger process. So my questions are rather artificial, but that doesn’t make them bad questions. Their point was to help students make linkages between different parts of the course, and for that they work.
I would like to finish by saying how much I appreciate criticism. It is nice when people tell me they like my materials. I feel as if I am doing something useful and helping people. I get frequent comments of this type on my YouTube site. But when people make the effort to point out gaps and flaws in the material I am extremely grateful as it helps me to clarify my thinking and improve the approach. If nothing else, it gives me something to talk about in my blog. It is difficult producing material in a feedback vacuum. So keep it coming!