Scribbr (scribbr.com)
What Is Standard Error? | How to Calculate (Guide with Examples)
June 22, 2023 - The standard error of the math scores, on the other hand, tells you how much the sample mean score of 550 differs from other sample mean scores, in samples of equal size, in the population of all test takers in the region. The standard error of the mean is calculated using the standard deviation and the sample size. From the formula, you’ll see that the standard error is inversely proportional to the square root of the sample size.
Top answer
1 of 2 · Answer from BigOrange101 on Stack Exchange
2

In both scenarios $\sigma_1^2$ and $\sigma_2^2$ are unknown. The bottom formula is using the assumption that $\sigma_1^2 = \sigma_2^2$ and attempting to estimate that shared variance by pooling all observations together and calculating a weighted mean. Thus, the factor on the left plays the role of both $s_1$ and $s_2$ in the bottom equation. This method is usually used when you have small sample sizes and the equal variance assumption is plausible.
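
For example, with two made-up samples (the sizes and variances below are purely illustrative, not from the question), the pooling step is just a degrees-of-freedom-weighted mean of the two sample variances:

```python
# Sketch: pooled variance as a df-weighted mean of the two sample variances.
# All numbers here are invented for illustration.
n1, n2 = 8, 12            # two small samples
s1_sq, s2_sq = 4.0, 6.5   # sample variances of group 1 and group 2

# Weights are the degrees of freedom (n_i - 1), not the sample sizes
pooled_var = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
print(pooled_var)  # one estimate of the (assumed) common variance
```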

2 of 2
2

There are two different versions of the two-sample t test in common usage.

Pooled. The assumption, often unwarranted in practice, is made that the two populations have the same variance $\sigma_1^2 = \sigma_2^2 = \sigma^2.$ In that case one seeks to estimate the common population variance $\sigma^2,$ using both of the sample variances, to obtain what is called a pooled estimate $s_p^2 = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}.$

If the two sample sizes are equal, then this is simply $s_p^2 = (s_1^2 + s_2^2)/2.$ But if sample sizes differ, then greater weight is put on the sample variance from the larger sample. The weights use the degrees of freedom $n_i - 1$ instead of the $n_i.$ The first factor under the radical in your pooled formula is $s_p^2.$ Under the assumption of equal population variances, the standard deviation of $\bar X_1 - \bar X_2$ (estimated standard error) is $s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}.$

Consequently, the $t$-statistic is $T = \frac{\bar X_1 - \bar X_2}{s_p\sqrt{1/n_1 + 1/n_2}}.$ Under the null hypothesis that the population means $\mu_1$ and $\mu_2$ are equal, this $t$-statistic has Student's t distribution with $n_1 + n_2 - 2$ degrees of freedom.
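
A minimal numeric sketch of the pooled calculation just described, using invented data and assuming scipy is available for the Student-t tail probability:

```python
import numpy as np
from scipy import stats

# Invented example data for two small groups
x1 = np.array([5.1, 4.8, 6.0, 5.5, 5.2, 4.9])
x2 = np.array([4.2, 4.6, 5.0, 4.4, 4.8, 4.1, 4.5])
n1, n2 = len(x1), len(x2)
s1_sq, s2_sq = np.var(x1, ddof=1), np.var(x2, ddof=1)   # sample variances

# Pooled variance: df-weighted mean of the sample variances
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Estimated standard error of (xbar1 - xbar2) under equal variances
se_pooled = np.sqrt(sp_sq * (1 / n1 + 1 / n2))

# Pooled t statistic and two-sided p-value on n1 + n2 - 2 degrees of freedom
t_stat = (x1.mean() - x2.mean()) / se_pooled
p_value = 2 * stats.t.sf(abs(t_stat), df=n1 + n2 - 2)
print(t_stat, p_value)
```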

Separate variances (Welch). The assumption of equal population variances is not made. Then the variance of $\bar X_1 - \bar X_2$ is $\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}.$ This variance is estimated by $\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2},$ so the (estimated) standard error is $\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}.$ So your first formula has typos and is incorrect; this may account for the "ludicrous" difference you are getting. If $n_1 = n_2,$ then you should get the same value from both standard error formulas. But the two (estimated) standard errors will not necessarily be equal if sample sizes differ.
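
With hypothetical sample summaries, the Welch standard error is computed without any pooling:

```python
import math

# Hypothetical summaries, not taken from the question
n1, n2 = 6, 7
s1_sq, s2_sq = 0.19, 0.10   # sample variances

# Welch: each sample variance is divided by its own sample size
se_welch = math.sqrt(s1_sq / n1 + s2_sq / n2)
print(se_welch)
```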

A crucial difference between the pooled and Welch t tests is that the Welch test uses a rather complicated formula involving both sample sizes and sample variances for the degrees of freedom (DF). The Welch DF is always between the minimum of $n_1 - 1$ and $n_2 - 1$ on the one hand and $n_1 + n_2 - 2$ on the other. So if both sample sizes are moderately large, both $t$-statistics will be nearly normally distributed when $H_0$ is true. The Welch $t$-statistic is only approximately distributed as Student's t, but simulation studies have shown that it is a very accurate approximation over a large variety of sample sizes (equal and not) and population variances (equal or not).
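
The "rather complicated formula" is the Welch-Satterthwaite approximation; a sketch with hypothetical summaries, checking that the result lands between the two bounds mentioned above:

```python
# Welch-Satterthwaite approximate degrees of freedom (hypothetical inputs)
n1, n2 = 10, 25
s1_sq, s2_sq = 4.0, 9.0

v1, v2 = s1_sq / n1, s2_sq / n2
df_welch = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Always between min(n1 - 1, n2 - 1) and n1 + n2 - 2
print(min(n1 - 1, n2 - 1), df_welch, n1 + n2 - 2)
```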

The current consensus among applied statisticians is always to use the Welch t test and not worry about whether population variances are equal. Most statistical computer packages use the Welch procedure by default and the pooled procedure only if specifically requested.
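
As one concrete illustration, scipy exposes both versions through the equal_var argument of ttest_ind; the Welch version is requested explicitly here rather than relying on any particular default (the data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=12)   # simulated group 1
b = rng.normal(loc=4.5, scale=2.0, size=30)   # simulated group 2, larger spread

welch = stats.ttest_ind(a, b, equal_var=False)   # Welch (separate variances)
pooled = stats.ttest_ind(a, b, equal_var=True)   # pooled (equal variances assumed)
print(welch)
print(pooled)
```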

People also ask

What is standard error?
The standard error of the mean, or simply standard error, indicates how different the population mean is likely to be from a sample mean. It tells you how much the sample mean would vary if you were to repeat a study using new samples from within a single population.
scribbr.com
What Is Standard Error? | How to Calculate (Guide with Examples)
What’s the difference between standard error and standard deviation?
Standard error and standard deviation are both measures of variability. The standard deviation reflects variability within a sample, while the standard error estimates the variability across samples of a population.
scribbr.com
What Is Standard Error? | How to Calculate (Guide with Examples)
What’s the difference between a point estimate and an interval estimate?
Using descriptive and inferential statistics, you can make two types of estimates about the population: point estimates and interval estimates. · A point estimate is a single value estimate of a parameter. For instance, a sample mean is a point estimate of a population mean. · An interval estimate gives you a range of values where the parameter is expected to lie. A confidence interval is the most common type of interval estimate. · Both types of estimates are important for gathering a clear idea of where a parameter is likely to lie.
scribbr.com
What Is Standard Error? | How to Calculate (Guide with Examples)
The BMJ (bmj.com)
5. Differences between means: type I and type II errors and power
February 9, 2021 - We saw in Chapter 3 that the mean of a sample has a standard error, and a mean that departs by more than twice its standard error from the population mean would be expected by chance only in about 5% of samples. Likewise, the difference between the means of two samples has a standard error. We do no
Calculator Academy (calculator.academy)
Standard Error Of Difference Calculator - Calculator Academy
December 1, 2023 - To calculate the standard error of the difference, square the standard deviations of both samples, divide each by their respective sample sizes, add the two values, and take the square root of the sum.
Vassarstats (vassarstats.net)
Standard Error of Sample-Mean Differences
where sd² = the variance of the ... distribution of sample-mean differences; enter the mean and standard deviation (sd) of the source population, along with the values of na and nb, and then click the "Calculate" button....
Statistics LibreTexts (stats.libretexts.org)
5.3: Difference of Two Means - Statistics LibreTexts
April 24, 2022 - When assessing the difference in two means, the point estimate takes the form \(\bar{x}_1 - \bar{x}_2\), and the standard error again takes the form of Equation (5.4). Finally, the null value is the difference in sample means under the null hypothesis. Just as in Chapter 4, the test statistic Z is used to identify the p-value. The formula for the standard error of the difference in two means is similar to the formula for other standard errors.
UConn (researchbasics.education.uconn.edu)
Standard Error of the Mean Difference | Educational Research Basics by Del Siegle
May 22, 2015 - In lieu of taking many samples one can estimate the standard error from a single sample. This estimate is derived by dividing the standard deviation by the square root of the sample size.
Top answer
1 of 3
9

You seem to be thinking that $\operatorname{sd}(X - Y) = \operatorname{sd}(X) + \operatorname{sd}(Y)$.

This is not the case for independent variables.

For independent $X$ and $Y$, $\operatorname{Var}(X - Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)$.

Further, $\operatorname{Var}(\bar X) = \operatorname{Var}(X)/n$ (if the $X_i$ are independent of each other).

http://en.wikipedia.org/wiki/Variance#Basic_properties
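
A quick simulation sketch of those two facts, with arbitrary normal distributions and sizes (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
reps = 100_000   # number of simulated replications

# Independent X and Y: Var(X - Y) = Var(X) + Var(Y), not their difference
x = rng.normal(0.0, 2.0, size=reps)   # Var(X) = 4
y = rng.normal(0.0, 3.0, size=reps)   # Var(Y) = 9
print(np.var(x - y))                  # close to 13, not 4 - 9 = -5

# Variance of a mean of n independent observations is Var(X) / n
n = 25
xbar = rng.normal(0.0, 2.0, size=(reps, n)).mean(axis=1)
print(np.var(xbar))                   # close to 4 / 25 = 0.16
```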

In summary, the correct term $\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$:

has the terms $\frac{s_1^2}{n_1}$ and $\frac{s_2^2}{n_2}$ because we're looking at averages, and $\sigma^2/n$ is the variance of an average of $n$ independent random variables;

has a $+$ because the two samples are independent, so their variances (of the averages) add; and

has a square root because we want the standard deviation of the distribution of the difference in sample means (the standard error of the difference in means). The part under the bar of the square root is the variance of the difference (the square of the standard error). Taking square roots of squared standard errors gives us standard errors.

The reason why we don't just add standard errors is that standard errors don't add: the standard error of the difference in means is NOT the sum of the standard errors of the sample means for independent samples; the sum will always be too large. The variances do add, though, so we can use that to work out the standard errors.
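
A small numeric illustration of that summary, with hypothetical sample summaries:

```python
import math

# Hypothetical summaries for two independent samples
n1, n2 = 40, 50
s1, s2 = 8.0, 10.0                   # sample standard deviations

se1 = s1 / math.sqrt(n1)             # standard error of mean 1, about 1.26
se2 = s2 / math.sqrt(n2)             # standard error of mean 2, about 1.41

# Correct: add the variances of the means, then take the square root
se_diff = math.sqrt(s1**2 / n1 + s2**2 / n2)

print(se_diff)      # about 1.90
print(se1 + se2)    # about 2.68: simply adding the SEs is always too large
```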


Here's some intuition about why it's variances that add, rather than standard deviations.

To make things a little simpler, just consider adding random variables.

If $\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)$, why is $\operatorname{sd}(X + Y) \neq \operatorname{sd}(X) + \operatorname{sd}(Y)$?

Imagine $Y = cX$ (for some $c > 0$); that is, $X$ and $Y$ are perfectly linearly dependent. That is, they always 'move together' in the same direction and in proportion.

Then $X + Y = (1 + c)X$, which is simply a rescaling. Clearly $\operatorname{sd}(X + Y) = (1 + c)\operatorname{sd}(X) = \operatorname{sd}(X) + \operatorname{sd}(Y)$.

That is, when $X$ and $Y$ are perfectly positively linearly dependent, always moving up or down together, standard deviations add.

When they don't always move up or down together, they sometimes move in opposite directions. That means that their movements partly 'cancel out', yielding a smaller standard deviation than the direct sum.
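
A short simulation of that intuition, with an assumed normal X and a Y that is either perfectly dependent on X or independent of it:

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 100_000
x = rng.normal(0.0, 1.0, size=reps)

# Perfectly positively dependent: Y = 2X, so sd(X + Y) = sd(X) + sd(Y) = 3
y_dependent = 2.0 * x
print(np.std(x + y_dependent))       # close to 3

# Independent Y with the same sd of 2: movements partly cancel
y_independent = rng.normal(0.0, 2.0, size=reps)
print(np.std(x + y_independent))     # close to sqrt(1 + 4) = 2.24, less than 3
```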

2 of 3
5

Algebraic intuition

The standard error of the mean for $n$ independent observations is $\frac{\sigma}{\sqrt{n}}$, where $\sigma$ is the standard deviation.

So if we have two independent samples, we have the standard errors $\frac{\sigma_1}{\sqrt{n_1}}$ and $\frac{\sigma_2}{\sqrt{n_2}}$ for the means of group 1 and group 2.

If we square these values we get the variances of the means: $\frac{\sigma_1^2}{n_1}$ and $\frac{\sigma_2^2}{n_2}$.

The variance of the sum or difference of two independent random variables is the sum of the two variances. Thus, $\operatorname{Var}(\bar X_1 - \bar X_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}.$

So if we want the standard error of the difference we take the square root of the variance: $\operatorname{SE}(\bar X_1 - \bar X_2) = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}.$

So I imagine this is intuitive if the component steps are intuitive. In particular it helps if you find intuitive the idea that the variance of the sum of independent variables is the sum of the variances of the component variables.
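
The same steps with hypothetical numbers, in case a worked example helps:

```python
import math

# Hypothetical inputs: standard deviation and size of each independent sample
sd1, n1 = 12.0, 36
sd2, n2 = 9.0, 25

se_mean1 = sd1 / math.sqrt(n1)    # standard error of mean 1: 2.0
se_mean2 = sd2 / math.sqrt(n2)    # standard error of mean 2: 1.8

var_mean1 = se_mean1 ** 2         # variance of mean 1: 4.00
var_mean2 = se_mean2 ** 2         # variance of mean 2: 3.24

var_diff = var_mean1 + var_mean2  # variances of independent means add: 7.24
se_diff = math.sqrt(var_diff)     # standard error of the difference: about 2.69
print(se_diff)
```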

Fuzzy Intuition

In terms of more general intuition, if $\frac{\sigma_1}{\sqrt{n_1}} = 1$ and $\frac{\sigma_2}{\sqrt{n_2}} = 1$ then the standard error of the difference between means will be $\sqrt{1^2 + 1^2} = \sqrt{2} \approx 1.41$. It makes sense that this value of approximately 1.4 is greater than 1 (i.e., the standard error of a variable after adding a constant; i.e., equivalent to a one-sample t-test) and less than 2 (i.e., the standard deviation of the sum of two perfectly correlated variables with equal variance, and the standard error implied by the formula you mention: $1 + 1 = 2$).
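
The arithmetic behind that 1.4, with both standard errors set to 1 as assumed above:

```python
import math

se1 = se2 = 1.0
se_diff = math.sqrt(se1**2 + se2**2)
print(se_diff)   # about 1.414: above 1 (one-sample case) and below the 2 from just adding the SEs
```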

Wikipedia (en.wikipedia.org)
Standard error - Wikipedia
October 10, 2025 - Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases. The formula given above for the standard error assumes that the population is infinite.
YouTube (youtube.com)
Standard Error of the Difference - YouTube
A video showing how to calculate the Standard Error of the Difference and how to verbally explain your results!
Published   August 17, 2017
Views   38K
GraphPad (graphpad.com)
The standard error of the difference between two means - FAQ 1490 - GraphPad
Given the assumptions of the analysis (Gaussian distributions, both populations have equal standard deviations, random sampling, ...) you can be 95% sure that the range between -31.18 and 9.582 contains the true difference between the means of the populations the data were sampled from.
RPubs (rpubs.com)
RPubs - Standard error for the difference between 2 means
Standard error for the difference between 2 means · by Nathan Brouwer · Last updated almost 9 years ago
GraphPad (graphpad.com)
GraphPad Prism 10 Statistics Guide - The SE of the difference between means
For multiple comparisons, the standard error of the difference is computed as sqrt(L*C*L'), where L is a vector/matrix of contrasts and C is a variance-covariance matrix at best-fit values.
Testbook (testbook.com)
Standard Error Formula: Definition, Formula with Solved Examples
Learn Standard Error formula, which is (Standard Deviation of Sample) / √(Sample Size). It measures how much the sample mean might deviate from the true population mean.
Investopedia (investopedia.com)
Standard Error (SE) Definition: Standard Deviation in Statistics Explained
May 16, 2025 - You can also conceptualize this by thinking of a bell curve where the center represents the null hypothesis. If the standard error is large, the curve is wider and flatter, meaning observed results must be farther from the center to be considered unusual. If the standard error is small, the curve is tighter, and even small differences from the center appear extreme.
Reddit (reddit.com/r/statistics)
r/statistics on Reddit: Calculating standard error of difference between two coefficients from same regression
January 7, 2019 -

Let's say I recover B1 and B2 from a regression (y = B0 + B1x1 + B2x2), and a quantity of interest is the difference between the two: B2 - B1. What's the formula for the standard error of this quantity?

BYJUS (byjus.com)
Standard Error Meaning
June 14, 2021 - The regression line minimizes the sum of squared deviations of prediction. It is also known as the sum of squares error. SEE is the square root of the average squared deviation. The deviation of some estimates from intended values is given by the standard error of estimate formula.
Standard Deviation Calculator (standarddeviationcalculator.io)
Standard Error Calculator
SE = S/√n = (36.78)/(√49) = (36.78)/(7) = 5.25429 Thus, the standard error for given summary data is equal to “5.25429”. To verify the result of this summary data use our above Standard error Calculator. What is the difference between raw and summary data?