Standard error tells you how much a sample estimate would usually change from one random sample to another. On this page, that estimate is the sample mean. Standard error measures the typical sampling variation of the mean, not the spread of the raw data.
For the sample mean, the standard error is

$$\mathrm{SE} = \frac{\sigma}{\sqrt{n}}$$

if the population standard deviation $\sigma$ is known. In practice, $\sigma$ is often unknown, so you estimate it with the sample standard deviation $s$:

$$\widehat{\mathrm{SE}} = \frac{s}{\sqrt{n}}$$
This formula is for the mean under the usual setup: the observations are treated as an independent random sample, and you are asking about the precision of the sample mean. Smaller standard error means a more precise estimate.
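As a quick sanity check of the formula, here is a minimal Python sketch; the sample values are made up purely for illustration:

```python
import math
import statistics

# A small made-up sample; any list of numbers works here.
data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]

n = len(data)
s = statistics.stdev(data)   # sample standard deviation (divides by n - 1)
se = s / math.sqrt(n)        # estimated standard error of the mean

print(f"n = {n}, s = {s:.3f}, SE = {se:.3f}")
```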
What Standard Error Actually Measures
Standard error is about an estimate, not about individual observations. If you kept taking new samples of the same size from the same population, the sample mean would move around. The standard error describes the typical size of that movement.
That is why standard error gets smaller when $n$ gets larger. Averaging more observations usually makes the sample mean more stable from sample to sample, assuming the data collection process stays comparable.
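You can watch this happen with a small simulation. The sketch below uses an arbitrary normal population (mean 100, SD 15, chosen only for the demonstration); the observed spread of the sample means shrinks as $n$ grows, tracking $15/\sqrt{n}$ (about 4.7, 2.4, and 1.2 for the three sizes):

```python
import random
import statistics

random.seed(0)

def spread_of_sample_means(n, reps=2000):
    """Draw `reps` samples of size n from a Normal(100, 15) population
    and return the standard deviation of the resulting sample means."""
    means = [statistics.mean(random.gauss(100, 15) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

for n in (10, 40, 160):
    print(f"n = {n:3d}: spread of sample means = {spread_of_sample_means(n):.2f}")
```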
Standard Error vs Standard Deviation
This is the distinction that causes most confusion. Standard deviation describes how spread out the data values are within one data set. Standard error describes how spread out a statistic, such as the sample mean, would be across many repeated samples.
For the mean, the two are connected by

$$\widehat{\mathrm{SE}} = \frac{s}{\sqrt{n}}$$

when you are estimating $\sigma$ from a sample. So standard error uses standard deviation, but they answer different questions.
Use this shortcut:
- Standard deviation asks, "How spread out are the data values?"
- Standard error asks, "How precise is my sample mean as an estimate?"
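To make the two questions concrete, here is a short sketch (the scores are made up) that computes both from the same sample:

```python
import math
import statistics

# One made-up sample of test scores, for illustration only.
scores = [72, 85, 78, 90, 66, 81, 77, 88, 74, 83]

s = statistics.stdev(scores)       # how spread out are the data values?
se = s / math.sqrt(len(scores))    # how precise is the sample mean?

print(f"standard deviation: {s:.2f}")
print(f"standard error:     {se:.2f}")
```

Notice that the standard deviation stays roughly the same no matter how many scores you collect, while the standard error keeps shrinking as the sample grows.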
One Worked Example of Standard Error
Suppose a sample of $n = 25$ students has a mean test score of $\bar{x} = 78$ and a sample standard deviation of $s = 10$.

The estimated standard error of the mean is

$$\widehat{\mathrm{SE}} = \frac{s}{\sqrt{n}} = \frac{10}{\sqrt{25}} = 2$$

The key point is the interpretation. The value $2$ does not mean most students score within $2$ points of $78$. That would confuse standard error with standard deviation.

Instead, it means that if you repeatedly took similar random samples of $25$ students from the same population, the sample mean would typically vary by about $2$ points from sample to sample.
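The arithmetic is easy to verify:

```python
import math

s = 10   # sample standard deviation from the example
n = 25   # number of students in the sample

print(s / math.sqrt(n))   # 2.0
```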
Why the Formula Uses $\sqrt{n}$
The $\sqrt{n}$ in the denominator explains why larger samples give more precise mean estimates. If the sample size grows, the denominator grows, so the standard error gets smaller.

But the change is not linear. To cut the standard error in half, you usually need about four times the sample size, because

$$\frac{s}{\sqrt{4n}} = \frac{1}{2} \cdot \frac{s}{\sqrt{n}}$$
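A few lines make the fourfold pattern visible; the starting values are arbitrary:

```python
import math

s = 10
for n in (25, 100, 400):
    print(f"n = {n:3d}: SE = {s / math.sqrt(n):.2f}")   # 2.00, 1.00, 0.50
```

Each time $n$ quadruples, the standard error is cut in half.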
Common Standard Error Mistakes
- Using standard error and standard deviation as if they were interchangeable.
- Saying a small standard error means the raw data have little spread. That conclusion does not follow unless you also know the standard deviation is small.
- Forgetting that the formula here is specifically for the sample mean.
- Assuming a bigger sample always fixes bias. A larger $n$ reduces random sampling variation, but it does not automatically correct a biased sample (see the sketch after this list).
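Here is a toy simulation of that last point, with a deliberately biased sampling rule invented for the demonstration. Going from 50 to 5,000 observations sharpens the estimate, but it stays centered on the wrong value:

```python
import random
import statistics

random.seed(1)

population = list(range(100))          # true mean is 49.5
true_mean = statistics.mean(population)

def biased_sample(n):
    """Half the time, draw only from values above 60 (the bias)."""
    heavy = [x for x in population if x > 60]
    return [random.choice(heavy if random.random() < 0.5 else population)
            for _ in range(n)]

for n in (50, 5000):
    m = statistics.mean(biased_sample(n))
    print(f"n = {n:4d}: sample mean = {m:.1f} (true mean = {true_mean})")
```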
When Standard Error Is Used
Standard error matters when you want to judge how precise an estimate is. It appears in confidence intervals, hypothesis tests, regression output, and survey results.
In each case, the idea is the same: standard error helps connect one sample to the uncertainty in the estimate that came from that sample.
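For instance, a common large-sample 95% confidence interval is roughly the estimate plus or minus two standard errors. A minimal sketch with made-up data, assuming the normal critical value 1.96 is adequate (for a sample this small, a $t$ critical value would be more careful):

```python
import math
import statistics

data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]

mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))

# Approximate 95% interval: estimate +/- 1.96 standard errors.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

A smaller standard error directly narrows this interval.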
Try a Similar Problem
Try your own version with $s = 12$ and $n = 36$. Compute the standard error of the mean, then compare it with the case where $n = 144$. That is a quick way to see how sample size changes precision. If you want to go further, explore a confidence interval next and see how the standard error affects its width.
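After working it by hand, a quick check (using the same illustrative numbers):

```python
import math

s = 12
for n in (36, 144):
    print(f"n = {n:3d}: SE = {s / math.sqrt(n):.2f}")   # 2.00, then 1.00
```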