
Talk:Sampling distribution


The shape of (or formula for) the sampling distribution depends on the distribution of the population, the statistic being considered, and the sample size used.

The phrase "shape of" doesn't go well with the sampling distribution. The shape goes only with the curves, which are a representation of the distribution and not the distribution itself. Maybe this needs to be reworded. For the moment, I'll delete the phrase. -- Sundar 06:15, Sep 22, 2004 (UTC)
I concur. Stellaathena (talk) 22:58, 17 November 2020 (UTC)[reply]

I'd just like to say that I found this a very useful and clear description. I've just been struggling to understand it in a stats textbook and this has made it much clearer. Thanks! Lou.weird 11:22, 13 February 2007 (UTC)[reply]

Examples


The case in the examples of sampling from a finite population of normal rvs needs attention and probably more explanation than can be fitted into the table. Either the finite population is treated as fixed, in which case the normal distribution of the values is irrelevant and the sample distribution relates to the values actually drawn for the finite population, or else the finite population is treated as itself random (which might be a strange idea). In the latter case the normal distribution is relevant, but the variance seems wrong ... consider the case of a complete sample n=N from the population of size N, then the variance should be the same as the variance of the mean of a sample of size N, σ²/N, whereas the formula gives zero. Melcombe (talk) 15:16, 8 April 2008 (UTC)[reply]

I also have a problem with this example. Strictly speaking, a finite population cannot be normally distributed. Is the intended interpretation that a finite sample of size N is drawn from a normal population, and then another sample of size n is drawn randomly? Perhaps this example should be clarified, perhaps with source cited, or removed. (Other than that, nice to have examples, thanks for adding.) RVS (talk) 23:59, 23 December 2008 (UTC)[reply]
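
A minimal simulation sketch in Python of the two readings discussed above (the variable names and parameter values are illustrative, not taken from the article). With the realised finite population of size N held fixed, the variance of the mean of a complete sample (n = N) collapses to zero; with the finite population itself redrawn from the normal superpopulation each time, it comes out near σ²/N, in line with Melcombe's argument.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, N, n, reps = 0.0, 1.0, 50, 50, 20000   # complete-sample case: n = N

# (a) Finite population treated as fixed: one realised population, resampled repeatedly.
fixed_pop = rng.normal(mu, sigma, N)
means_fixed = [rng.choice(fixed_pop, n, replace=False).mean() for _ in range(reps)]

# (b) Finite population treated as itself random: redraw the N values every repetition.
means_random = []
for _ in range(reps):
    pop = rng.normal(mu, sigma, N)
    means_random.append(rng.choice(pop, n, replace=False).mean())

print("fixed population:  var of sample mean =", np.var(means_fixed))   # essentially 0 when n = N
print("random population: var of sample mean =", np.var(means_random))  # close to sigma^2 / N
print("sigma^2 / N =", sigma**2 / N)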


Table of Sampling Distributions


This page needs a table of the sampling distributions of the most important statistics and distributions: the mean and median are not enough. I wanted to check the one for the standard deviation of a normal distribution; is it N(sigma, sigma/sqrt(2n))? I can't find it anywhere in Wikipedia.
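
A quick Monte Carlo sketch in Python (not a citation) of the N(sigma, sigma/sqrt(2n)) question above, for the sample standard deviation of a normal sample; the parameter values are arbitrary illustrations. For large n the simulated spread of the sample standard deviation does come out close to sigma/sqrt(2n).

import numpy as np

rng = np.random.default_rng(1)
sigma, n, reps = 2.0, 100, 20000

# Sample standard deviations (ddof=1) of repeated normal samples of size n.
sds = np.array([rng.normal(0.0, sigma, n).std(ddof=1) for _ in range(reps)])

print("mean of sample sd:", sds.mean(), "(sigma =", sigma, ")")
print("sd of sample sd:  ", sds.std(), "(sigma/sqrt(2n) =", sigma / np.sqrt(2 * n), ")")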

Conflict with central limit theorem?


I'm a little bit confused about how the information in this article corresponds with that of the central limit theorem (then again, I'm not a mathematician). If you calculated the sample mean of, say, 100 different samples drawn from the same population, wouldn't you expect the sampling distribution of the sample mean to always be (approximately) normally distributed? The mean is a composite score of several random variables, right, so the central limit theorem would have to apply, no? So why does this article state that the original population would have to be normally distributed? --Steerpike (talk) 14:51, 23 September 2009 (UTC)[reply]

It doesn't say that. It says essentially: "if the original population is normal, then the distribution of the mean is normal". The more general case is covered by saying that "asymptotic distribution theory" can be used which, for the mean, would be the central limit theorem. Melcombe (talk) 16:05, 23 September 2009 (UTC)[reply]

Ok, thank you. --Steerpike (talk) 23:40, 23 September 2009 (UTC)[reply]


This page doesn't even mention the CLT :-( 131.118.200.52 (talk) 18:23, 16 May 2010 (UTC)[reply]
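
A minimal Python sketch of the point in the reply above (illustrative parameters only): the mean of samples from a normal population is exactly normal, while the mean of samples from a skewed population, here an exponential, is only approximately normal, and that approximation is what the central limit theorem describes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 30, 20000

# Sample means from an exponential (non-normal) population with mean 1 and variance 1.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# Compare a few quantiles with the CLT's normal approximation N(1, 1/n).
approx = stats.norm(loc=1.0, scale=1.0 / np.sqrt(n))
for q in (0.05, 0.5, 0.95):
    print(q, np.quantile(means, q), approx.ppf(q))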

Bernoulli sampling distribution


Is it correct that the sampling distribution for X-bar (the mean), where X_i are a random sample from a Bernoulli distribution, is a Binomial distribution? The sum of n Bernoulli random variables has a Binomial distribution, not the mean. Moreover, the Binomial distribution is for integer values only, so X-bar, which is a proportion, cannot have a Binomial dist. Oddjobn99 (talk) 03:43, 16 November 2009 (UTC)[reply]
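
A small Python sketch of the point raised above (arbitrary parameters): if S = X_1 + ... + X_n is Binomial(n, p), then X-bar = S/n takes the values k/n with the same Binomial probabilities, so its sampling distribution is a rescaled binomial rather than a binomial proper.

import numpy as np
from math import comb

rng = np.random.default_rng(3)
n, p, reps = 10, 0.3, 100000

# X-bar = S/n, where S is Binomial(n, p); simulate S directly and rescale.
xbar = rng.binomial(n, p, size=reps) / n
values, counts = np.unique(xbar, return_counts=True)

for v, c in zip(values, counts):
    k = int(round(v * n))
    exact = comb(n, k) * p**k * (1 - p)**(n - k)   # P(S = k) = P(X-bar = k/n)
    print(f"X-bar = {v:.1f}: simulated {c / reps:.4f}, Binomial(n, p) mass at k = {k}: {exact:.4f}")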

Sampling vs. Asymptotic distributions


Regarding the sentence: "The sampling distribution is frequently opposed to the asymptotic distribution, which corresponds to..."

Is there a source for this? I don't see the two terms as opposing, and it seems to be confounding "sampling distribution" with "finite sample distribution". I don't see why you can't talk about the "asymptotic approximation to the sampling distribution" of some statistic. —Preceding unsigned comment added by 163.1.167.13 (talk) 22:45, 28 February 2011 (UTC)[reply]
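
A sketch in the spirit of the comment above (the choice of exponential and the parameter values are illustrative only): for the mean of n iid Exponential(rate lam) variables the exact sampling distribution is Gamma(shape=n, rate=n·lam), and the asymptotic distribution N(1/lam, 1/(n·lam²)) is simply its large-n approximation, not something opposed to it.

import numpy as np
from scipy import stats

lam, n = 1.0, 25
exact = stats.gamma(a=n, scale=1.0 / (n * lam))                    # exact sampling distribution of the mean
asym = stats.norm(loc=1.0 / lam, scale=1.0 / (lam * np.sqrt(n)))   # its asymptotic (CLT) approximation

for q in (0.05, 0.5, 0.95):
    print(q, exact.ppf(q), asym.ppf(q))   # the two agree more closely as n grows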

Standard error and sample size misinterpretations?


The CLT states that by collecting several sub-samples (say q = 10 subgroups) of size n (i.e., n = 30 individual readings per subgroup), the standard error will be any individual subgroup's sigma divided by SQRT(n).

I have difficulty accepting this. I believe that, to determine the standard error, the sample's std. deviation should be divided by SQRT(q).

More subgroups would appear to me to tighten the variance of all the X-bars, but the sub-sample size (n) not so much.

I.e., as a supplier Q.E., I would rather see 15 subgroups of 15 samples than one subgroup of 225 measurements. The former uses the CLT; the latter single sample is not a sampling distribution!


Stevegyro (talk) 23:07, 5 July 2018 (UTC)[reply]
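
A minimal simulation sketch in Python of the quantities being discussed (illustrative parameters; not a definitive reading of the CLT). It draws q subgroups of size n from the same population: an individual subgroup mean has spread sigma/sqrt(n), while the grand mean of the q subgroup means has the same spread as the mean of a single sample of size q·n, namely sigma/sqrt(q·n).

import numpy as np

rng = np.random.default_rng(4)
sigma, q, n, reps = 1.0, 15, 15, 20000

subgroup_means = rng.normal(0.0, sigma, size=(reps, q, n)).mean(axis=2)   # q X-bars per repetition
grand_means = subgroup_means.mean(axis=1)                                 # mean of the q X-bars
single_sample_means = rng.normal(0.0, sigma, size=(reps, q * n)).mean(axis=1)

print("sd of one subgroup mean:      ", subgroup_means[:, 0].std(), "vs sigma/sqrt(n)   =", sigma / np.sqrt(n))
print("sd of grand mean (q of n):    ", grand_means.std(), "vs sigma/sqrt(q*n) =", sigma / np.sqrt(q * n))
print("sd of mean of one q*n sample: ", single_sample_means.std())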