Videos
What is standard error?
What’s the difference between standard error and standard deviation?
What’s the difference between a point estimate and an interval estimate?
In the usual situation, the actual standard error of the mean is $\sigma/\sqrt{n}$, but you almost never know $\sigma$ (the population standard deviation), so you have to estimate it. Consequently, the estimated standard error of the mean is usually taken to be $s/\sqrt{n}$, where $s$ is the sample standard deviation.
Some sources call $s/\sqrt{n}$ "the standard error of the mean", but strictly speaking it is only an estimate of it.
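As a concrete illustration, here is a minimal sketch of that estimate; the function name and the numbers are hypothetical placeholders, not anything from the discussion above:

```python
import math

def estimated_standard_error(sample):
    """Estimate the standard error of the mean as s / sqrt(n),
    where s is the sample standard deviation (n - 1 in the denominator)."""
    n = len(sample)
    mean = sum(sample) / n
    s_squared = sum((x - mean) ** 2 for x in sample) / (n - 1)  # Bessel-corrected variance
    return math.sqrt(s_squared) / math.sqrt(n)

measurements = [4.1, 5.2, 6.3, 5.8, 4.9]       # hypothetical sample
print(estimated_standard_error(measurements))  # estimate of sigma / sqrt(n)
```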
When you divide by n, you get the "population" formula. But that's theoretical hogwash: you never know the SD of a population, and if you did, why the heck wouldn't you also know the mean?
Using the sample SD is fine, but the plain SD-over-$\sqrt{n}$ estimator is biased when the SD is computed with $n$ in the denominator, because the same data are used to estimate both the mean and the spread around that mean. The degrees-of-freedom correction, dividing the sum of squared deviations by $n-1$ rather than $n$ (Bessel's correction), fixes this for the variance: $s^2$ is then unbiased for $\sigma^2$, and $s^2/n$ is unbiased for the variance of the sample mean. Even so, $s/\sqrt{n}$ itself remains slightly biased for $\sigma/\sqrt{n}$, because taking the square root of an unbiased estimator reintroduces a little bias.
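For what it's worth, the two ways of writing the correction land in the same place: dividing the population-formula SD (denominator $n$) by $\sqrt{n-1}$ is algebraically identical to dividing the Bessel-corrected sample SD by $\sqrt{n}$. A minimal sketch, using simulated data purely for illustration:

```python
import math
import random

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(20)]    # simulated data, illustration only
n = len(sample)
mean = sum(sample) / n
ss = sum((x - mean) ** 2 for x in sample)           # sum of squared deviations

sd_pop = math.sqrt(ss / n)          # "population" formula: divide by n
s = math.sqrt(ss / (n - 1))         # sample SD with Bessel's correction

# Both routes give the same estimated standard error of the mean:
print(sd_pop / math.sqrt(n - 1))    # population-formula SD over sqrt(n - 1)
print(s / math.sqrt(n))             # Bessel-corrected SD over sqrt(n)
```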
Guys, up until this point I thought the standard error was $s/\sqrt{n}$, where $s$ is the estimated standard deviation and $n$ is the number of samples. That usually works when I solve confidence-interval problems for the normal distribution. In other cases it doesn't, and I have to use the square root of the variance instead, which doesn't give the same answer as before. I'm confused about which one to use and when.
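One possible source of that confusion, offered only as a guess: the square root of the variance of the data is the SD $s$, while the square root of the variance of the sample mean, $s^2/n$, is the standard error, and the latter is exactly $s/\sqrt{n}$. A small sketch with made-up numbers:

```python
import math

sample = [12.0, 9.5, 11.2, 10.8, 13.1, 9.9]   # made-up measurements
n = len(sample)
mean = sum(sample) / n
s_squared = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance

sd = math.sqrt(s_squared)        # square root of the variance of the data: the SD
se = math.sqrt(s_squared / n)    # square root of the variance of the mean: the SE

print(sd)                        # not the standard error of the mean
print(se, sd / math.sqrt(n))     # these two agree: s / sqrt(n)
```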