The question of whether to teach the Central Limit Theorem explicitly seems to divide instructors along philosophical lines. Let us look first at these lines.

There are at least three different areas of activity within the discipline of statistics. These are

- Theory of statistics and research into statistics
- Practice of statistics
- Teaching statistics and related research

The theory of statistics is mathematical. It is taught and practised in Mathematics and Statistics Departments of Universities. It is possible to be an expert on the theory and mathematics of statistics while having little contact with real data. The theory provides underpinnings to the practice of statistics. It is vital that some people know this – but not most of us. One would hope that people employed as statisticians would have a sound understanding of both the theoretical and applied aspects of statistics. This relates strongly to the research into statistics, which seems to be very mathematical, from my perusal of journals. This research advances the theory and use of statistical methods and philosophy.

The practice of statistics occurs in many, many areas, particularly in universities. Most postgraduate courses require some proficiency in the application of statistical methods. Researchers in areas as diverse as psychology, genetics, market research, education, geography, speech therapy, physiotherapy, mechanics, management, economics and medicine all use statistical methods. Some researchers have a deep understanding of the theory of statistics, but most aim to be safe and competent practitioners. When they get to the tricky bits they know to ask a statistician, but most of the day-to-day data generation, collection and analysis is within their capability.

Then there is the teaching of statistics. The level of applicability and theory taught will depend on the context. An instructor in statistics (in a non-service course) in a Department of Mathematics would tend towards the mathematical aspects, as that is most appropriate to the audience. However in just about every other setting the emphasis will be on the practical aspects of data collection and inference. This treatment of statistics is explicable, accessible and interesting to just about anyone, whereas only the mathematically inclined are likely to get excited about the theory of statistics.

There is another growing area, which is the research into the teaching and learning of statistics. This informs and is informed by the other areas, as well as general educational research and cognitive psychology. Much of my thinking comes from this background. An overview of some of the material relating to college level can be found in this literature review. The general topic of How Students Learn Statistics is introduced in this early paper by Joan Garfield (1995), a leader in the field of statistics education research.

Statistics is gradually making its way into the school curriculum internationally, and in New Zealand has become a separate subject in the final year of schooling. There are philosophical issues arising, as most of the teachers of statistics are mathematicians, and some tend towards the beauty and elegance of the formulas, proofs etc. The aim of the curriculum, however, is more towards statistical investigations and statistical literacy. There are fuzzy, dirty, ambiguous, context-driven explorations with sometimes extensive write-ups. There is discussion and critique of statistical reports. There are experiments which may or may not produce usable results. Some of this is well into the realms of social science and well away from what mathematicians find appealing or even comfortable. In another life I can hear myself saying, “I didn’t become a maths teacher to mark essay questions!” There is a bit of a mismatch between the skill-set and attitudes of the teachers and the curriculum.

One place where this is particularly evident is in the question of teaching the Central Limit Theorem. Mathematicians like the Central Limit Theorem and it seems that they like to teach it. One teacher states “The fact that the CLT is to be de-emphasised in Yr 13 is a major disappointment to me…” This statement prompted this post. I agree that the CLT is neat. It is really handy. And it makes confidence interval calculation almost trivial. There are cool little exercises you can do to illustrate it. It is the backbone of traditional statistical theory.

However, teaching and learning do not always go hand in hand. I wonder how many students really do internalise the Central Limit Theorem. Evidence says not many. Chance, delMas and Garfield, in “The Challenge of Developing Statistical Literacy, Reasoning and Thinking” (Ben-Zvi and Garfield 2004), state: “Sampling distributions is a difficult topic for students to learn. A complete understanding of sampling distributions requires students to integrate and apply several concepts from different parts of a statistics course and to be able to reason about the hypothetical behavior of many samples – a distinct, intangible thought process for most students. The Central Limit Theorem provides a theoretical model of the behavior of sampling distributions, but students often have difficulty mapping this model to applied contexts. As a result students fail to develop a deep understanding of the concept of sampling distributions and therefore often develop only a mechanical knowledge of statistical inference. Students may learn how to compute confidence intervals and carry out tests of significance, but they are not able to understand and explain related concepts, such as interpreting a p-value.”

I have a confession to make. I didn’t teach the Central Limit Theorem. It never seemed as if it were going to help my students understand what was going on. For a few years I made them do a little simulation exercise which helped them to see why the square-root of n occurred in the denominator of the formula for the standard error. That was fun and seemed to help. But the words “Central Limit Theorem” seldom passed my lips in my twenty years of instruction.
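That sort of simulation is easy to sketch in a few lines of Python. This is my own reconstruction of the idea, not the original classroom exercise: draw many samples of each size from some non-normal population, and compare the spread of the sample means with sigma divided by the square root of n.

```python
import random
import statistics

random.seed(1)

# Any non-normal population will do; here, uniform integers from 1 to 10
population = [random.randint(1, 10) for _ in range(10_000)]
sigma = statistics.pstdev(population)

for n in (4, 16, 64):
    # The spread of many sample means is the standard error of the mean
    means = [statistics.mean(random.sample(population, n))
             for _ in range(2000)]
    observed_se = statistics.stdev(means)
    print(f"n={n:2d}  observed SE={observed_se:.3f}  "
          f"sigma/sqrt(n)={sigma / n ** 0.5:.3f}")
```

Quadrupling the sample size halves the spread of the sample means, which is exactly why the square root of n sits in the denominator of the standard error.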

What has helped immeasurably has been videos, beginning with “Understanding the p-value”, and plenty of different examples and exercises using confidence intervals and hypothesis tests. (Another confession – I taught traditional statistical inference, not resampling. My excuse was that I didn’t know any better, and I had to stay in parallel with the course provided by the maths department.) What I have found from my own experience as a learner and as a teacher is that **students learn to understand statistics by DOING statistics**.

The Central Limit Theorem states that regardless of the shape of the population distribution, the distribution of sample means is approximately normal when the sample size is large enough. This was a really brilliant model for when simulation and resampling were impossible. The Central Limit Theorem makes it possible to calculate confidence intervals for population means from sample data. It is the reason why most statistical procedures either assume normality at some point, or take steps to correct for the lack thereof. (See the paper by Cobb I referred to extensively in last week’s post.)
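A quick way to see the theorem in action (my own sketch, not from the original post) is to draw sample means from a clearly skewed population and check that roughly 95% of them land within 1.96 standard errors of the population mean, as the normal model predicts:

```python
import random
import statistics

random.seed(2)

# A strongly right-skewed population (exponential), nowhere near normal
population = [random.expovariate(1.0) for _ in range(10_000)]
mu = statistics.mean(population)
sigma = statistics.pstdev(population)

n = 50
means = [statistics.mean(random.sample(population, n))
         for _ in range(5000)]

# If the normal approximation holds, about 95% of sample means
# fall within 1.96 standard errors of the population mean
se = sigma / n ** 0.5
inside = sum(abs(m - mu) <= 1.96 * se for m in means) / len(means)
print(f"Proportion of sample means within 1.96 SE: {inside:.3f}")
```

Even though individual values from this population are nothing like normal, the means of samples of fifty behave almost exactly as the normal model says they should.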

In a curriculum that develops from informal inference to formal inference using resampling, there is no need to call on the Central Limit Theorem. With resampling we use the distribution of the sample as the best estimate of the distribution of the population. True, it is quicker to use the old method of plugging the values into the formula. However, it isn’t much quicker than using the free iNZight software for resampling.
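For readers without iNZight to hand, the resampling idea itself fits in a few lines of Python. This is an illustrative sketch of a percentile bootstrap, with made-up data, not anything from iNZight or the curriculum materials:

```python
import random
import statistics

random.seed(3)

# One observed sample (simulated here; in practice this is your data)
sample = [random.expovariate(0.1) for _ in range(40)]

# Resample with replacement many times, treating the sample
# as the best available estimate of the population
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(5000)
)

# Percentile 95% confidence interval for the population mean
lower = boot_means[int(0.025 * len(boot_means))]
upper = boot_means[int(0.975 * len(boot_means))]
print(f"95% bootstrap CI for the mean: ({lower:.2f}, {upper:.2f})")
```

No normality assumption and no standard error formula: the interval comes straight from the variation among the resampled means.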

At high school level we want students to get an understanding of what inference is. (I would suggest my Pinkie Bar lesson as a good way of introducing the rejection part of Cobb’s mantra: Randomise, Repeat, Reject.) I’m not convinced that teaching the Central Limit Theorem, and formula-based confidence intervals for means and proportions, leads to understanding. Research suggests that it doesn’t. I agree that statistical theorists, and educators and researchers should all understand the Central Limit Theorem. I just don’t think that it has a vital place in an innovative curriculum based on resampling.

I suspect that teachers fear that if their students are not taught the Central Limit Theorem and traditional confidence intervals at high school they will be at a disadvantage at university. I’d like to reassure them that it just isn’t true. All first year university statistics courses that I know of assume no prior knowledge of statistics. (The same is true of some second year courses as well!) The greatest gift a high school statistics teacher can give their students is an attitude of excitement and success, with a healthy helping of scepticism, and an idea of what inference is – that we can draw conclusions about a population from a sample. If my first year students had started from that point, half our work would have been done.


## 8 Comments

Thank you for your very interesting thoughts on the Central Limit theorem. I actually thought that the theorem had a wider brief than applying just to the distribution of sample means. I recall a scholarship examiner criticising teachers for not knowing the wider applications of the theorem to any variable that could be written as the sum of n independent variables from the same population. This is one of the reasons why I have always valued it, as a sort of unifying factor in so many areas of our curriculum. I readily apply it to totals, binomial, binomial proportion, as a general panacea for many situations. I personally have no trouble teaching it, but I do have a statistics background from Auckland University to help me a little. There are many applets around today which readily assist in the teaching and understanding of it, and it has been held up as one of the most remarkable theorems ever proved.

I read the article by Cobb and found it academically attractive, convincing and challenging. The questions that continue to irk me are (i) how do you know when to make the call? (ii) what are the errors involved in making such a call? I suppose that hypothesis testing along with p-values took care of such issues and offered some form of security in accepting or rejecting such a hypothesis. I am just a little worried that objectivity is being lost, with personal interpretation being the prevailing arbiter, which seems inadequate. I see similar things happening in other parts of the course too. Solution of equations by Newton-Raphson methods is no longer seen as necessary since the graphics calculator has taken over. But now and then the calculator answer is not the correct one and some tinkering with initial values is needed before the correct answer emerges. I fear there is a danger that unless the necessary caveats are in place, some horrible errors are going to be made if unchallenged credence is accorded to computer generated results. Am I correct in assuming that these resampling and bootstrapping methods will be incorporated into tertiary programs in the near future, or are they seen as a secondary school developmental phase before traditional methods are taught at tertiary?

I have downloaded iNZight and when I get time (the secondary teacher’s curse) I will have a play with it. My knowledge is very much emergent at the moment, but I do believe I know where the thinking and ideas are coming from. My thoughts at the moment sway from periods of excitement with the new ideas to ones of serious doubt, with the word incestuous cropping up every now and then when I see just how much data is extracted from a reasonably small sample.

Regards, Patrick McEntee

Thanks Patrick. I’m afraid I can’t answer all your questions, but will look into the issue of certainty versus opinion. I think one problem with the p-value method is that people fail to internalise the idea that if you use a rejection value of 0.05, then about one in twenty of the tests where the null hypothesis is actually true will still reject it. This issue is exercising a great number of people, including the American Psychological Association (APA), who set the standard for such things in academic papers.

It is a VERY good question as to whether or when the universities will adopt resampling. I would probably push for a dual approach were I still in a position to do so. I’ll keep an eye out.

I quite like the Central Limit Theorem, but there are enough subtleties in what it requires and what it actually says that I’m not sure it is best taught to students, at least in their introduction to statistics. I do explain its consequences as best I can, but if there’s no following the proof (and at the introductory level there’s not), it’s hard to say how much they should take away beyond the idea that freak events aren’t likely to happen if you sample widely enough.

I’m certainly open to arguments for its importance as an introductory topic, though. I have much yet to learn about teaching it.

The Central Limit Theorem (CLT) leads to an SD for the sampling distribution of the mean that is proportional to 1/sqrt(n) only under the assumption of iid (independently and identically distributed) sample values. That assumption should not be taken for granted as somehow automagically satisfied, though many statistics courses (and pretty much all books that have ‘statistical learning’ or ‘data mining’ in the title) do so take it for granted! Data sets that show various forms of dependence are all around us — time series, multiple observations on each of a number of individuals, … So why is this point so wilfully and reprehensibly ignored?

There is an issue of how this point may be broached at an elementary level. At the very least, point out that in the “multiple observations on each of a number of individuals” situation, averages over individuals will be a lot closer to iid data values.

Ideas of sampling distribution provide the broad context in which the CLT as usually expounded is a special case. Simulation is more foundational than the mathematics, in that it can handle cases where the mathematics is difficult (so you would not touch it at an elementary level) or intractable. Thus if the 1/sqrt(n) version of the CLT appears at all (and yes, for well-roundedness it should ideally make an appearance), it should come as an “Oh, by the way” comment that if we can make iid assumptions, then this magical 1/sqrt(n) thing pops out. [Side note: At an advanced level, one might explain how the 1/sqrt(n) multiplier can be modified for data that follow an order 1 autoregressive time series process.]
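As a rough illustration of that side note (my own sketch, not part of the comment), positive autocorrelation in an order 1 autoregressive series inflates the true standard error of the mean well beyond the naive sigma/sqrt(n):

```python
import random
import statistics

random.seed(4)

def ar1_series(n, phi=0.7):
    """x[t] = phi * x[t-1] + noise, started from its stationary distribution."""
    sd = (1 / (1 - phi ** 2)) ** 0.5   # stationary SD when the noise has SD 1
    x, out = random.gauss(0, sd), []
    for _ in range(n):
        x = phi * x + random.gauss(0, 1)
        out.append(x)
    return out

n, phi = 100, 0.7
means = [statistics.mean(ar1_series(n, phi)) for _ in range(3000)]
observed_se = statistics.stdev(means)

# What sigma/sqrt(n) would predict if the values were iid
naive_se = (1 / (1 - phi ** 2)) ** 0.5 / n ** 0.5
print(f"observed SE={observed_se:.3f}  naive iid SE={naive_se:.3f}")
```

For large n the inflation factor approaches sqrt((1+phi)/(1-phi)), about 2.4 here, which is the kind of correction to the 1/sqrt(n) multiplier that the side note alludes to.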

I would have felt very let-down by my education had it been omitted from my Statistics course. And now, of course, you can see it kick in right before your eyes with pretty simple simulations. It still amazes me just how elegant it is.

Whether the CLT belongs in Statistics or in Probability courses is more of a moot point. But there is no avoiding the concept of sampling distributions in statistics, and the CLT is one way of setting the scene for that – not the only way, but a good one.

Not long ago on ANZStat we were tossing about a doo-lally article on whether or not students need to learn algebra anymore. I think that for statistics students, the CLT is a question almost of that order. To omit it would be to deprive students of a great part of the intellectual heritage of the subject, and that’s always bad. Giants are there so that you can stand on their shoulders.

Hi Bill – thanks for your comment. I think much of what you say depends on who the students are. I don’t think there is a blanket answer for all students.

