Understanding standard error calculation
[Q] Can someone explain how standard error is able to do what it does?
Hello!! I am trying to understand the workings behind the formula for standard error but I am getting very confused.
So from my understanding, standard error is the standard deviation of sample means, and the formula for it is:
standard error = standard deviation / sqrt(n), where n = sample size
I am confused as to whose standard deviation this is.
But this is what I gathered online: since the standard deviation is different for each sample, we need a standard deviation that is representative of all the samples, and we might as well call this the population standard deviation because it represents all of them. A standard deviation calculated from a sample that approximates the population standard deviation is called the unbiased estimate of the population standard deviation. So we are looking at calculating the unbiased estimate of the population standard deviation from the sample dataset, which means we have to use n - 1 in the denominator, because of degrees of freedom.
Okay, now we have the unbiased estimate of the population standard deviation, which stands in for the standard deviations of all the different samples.
Hence, standard error = unbiased estimate of population standard deviation / sqrt(n)
Is this correct? Any help is appreciated, thank you!
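If it helps to see the calculation laid out, here is a minimal sketch in Python with made-up numbers, assuming this is the standard error of the mean: the standard deviation is computed from the one sample you have, using n - 1 in the denominator (the unbiased estimate), and then divided by sqrt(n).

```python
import numpy as np

# Hypothetical sample of n = 10 measurements (made-up numbers for illustration).
sample = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.7, 5.2, 4.4, 4.8])
n = len(sample)

# Unbiased estimate of the population standard deviation:
# ddof=1 makes NumPy divide by (n - 1) instead of n.
s = np.std(sample, ddof=1)

# Standard error of the mean = s / sqrt(n).
se = s / np.sqrt(n)

print(f"sample mean     = {sample.mean():.3f}")
print(f"sample sd (n-1) = {s:.3f}")
print(f"standard error  = {se:.3f}")
```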
My understanding is that the standard error is essentially a measure of how different the means you obtain when you sample from a population will be. According to statistical theory, if you have a population and you take a sample of it, you can calculate the standard deviation by comparing each value to the mean of your sample. But then, when you take that number and simply divide it by the square root of your sample size, voila, you magically know how spread out the mean of every single sample you could ever take of that population is.
To me, it seems like a HUGE stretch to make such an assumption. It is already a bit of a stretch to think that your sample is a decent representation of the actual population mean, and sure, I get that these formulas are estimates rather than exact math. But I never would have guessed that the deviation of one sample, divided by a modification of the sample size, could tell you how much the mean of any sample could ever vary.
Am I way off in assuming this? Am I missing something that should make me think more clearly about all this?
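For what it's worth, this is something you can check by simulation rather than take on faith. A rough sketch (Python, assuming an arbitrary made-up population) that draws many samples, records each sample's mean, and compares the spread of those means to sigma / sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume a made-up population: exponential with mean 2 (its sd is also 2).
population_sd = 2.0
n = 30                 # sample size
n_samples = 100_000    # number of repeated samples

# Take many samples and record each sample's mean.
means = rng.exponential(scale=2.0, size=(n_samples, n)).mean(axis=1)

# Spread of the sample means, measured directly from the simulation...
print(f"empirical sd of sample means: {means.std(ddof=1):.4f}")
# ...versus the theoretical standard error sigma / sqrt(n).
print(f"sigma / sqrt(n)             : {population_sd / np.sqrt(n):.4f}")
```

The two numbers come out very close, which is the claim the formula is making: the variability of the sample mean shrinks with sqrt(n), whatever the population looks like (as long as it has a finite variance).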
Guys, up until this point I thought the standard error was s/sqrt(n), where s is the estimated standard deviation and n is the sample size. This usually works when I solve problems about confidence intervals for the normal distribution. In other cases it doesn't, and I need to use the square root of the variance, which doesn't give the same answer as before. I am confused about which one to use and when.
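It is hard to say more without a concrete problem, but note that for the mean of a sample the two routes agree: the estimated variance of the sample mean is s²/n, and its square root is exactly s/sqrt(n). A quick sketch (Python, with made-up data) confirming that algebra; if your answers differ, the "variance" in those other problems is probably the variance of a different statistic (a proportion, a sum, a difference of means), not the variance of the data divided by n.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=10, scale=3, size=25)   # made-up sample
n = len(sample)

s = np.std(sample, ddof=1)                  # sample standard deviation
var_of_mean = np.var(sample, ddof=1) / n    # estimated Var(x̄) = s² / n

print(f"s / sqrt(n)   = {s / np.sqrt(n):.6f}")
print(f"sqrt(s² / n)  = {np.sqrt(var_of_mean):.6f}")   # same value
```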