In statistics, the difference between an estimator's expected value and the true value of the parameter being estimated is called the bias. An estimator or decision rule having nonzero bias is said to be biased.

Although the term bias sounds pejorative, it is not necessarily used in that way in statistics. Biased estimators may have desirable properties. Not only do they sometimes have a smaller mean squared error than any unbiased estimator, but in some cases the only unbiased estimators are not even within the convex hull of the parameter space, so their use is absurd.

## Definition

Suppose we are trying to estimate the parameter $\theta$ using an estimator $\widehat{\theta}$ (that is, some function of the observed data). Then the bias of $\widehat{\theta}$ is defined to be

$\operatorname{E}(\widehat{\theta})-\theta.\,$

In words, this would be "the expected value of the estimator $\widehat{\theta}$ minus the true value $\theta$." This may be rewritten as

$\operatorname{E}(\widehat{\theta}-\theta).\,$

which would read "the expected value of the difference between the estimator and the true value" (the expected value of $\theta$ is precisely $\theta$, since it is a constant).
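This expectation can be approximated numerically by averaging an estimator over many simulated samples. The following is a minimal Python sketch; the helper name, the normal population, and its parameters are illustrative assumptions, not part of the definition:

```python
import random

def estimate_bias(estimator, sample_size, trials, mu=5.0, sigma=2.0, seed=0):
    """Monte Carlo approximation of E(theta_hat) - theta when estimating
    the mean mu of a normal population (an illustrative helper)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        total += estimator(sample)
    return total / trials - mu  # averaged estimate minus the true value

def sample_mean(xs):
    return sum(xs) / len(xs)

# The sample mean is unbiased for mu, so this comes out near zero.
print(estimate_bias(sample_mean, sample_size=10, trials=20000))
```

With enough trials the printed value approaches the true bias, here zero.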

## Examples

### Estimating variance

Suppose $X_1, \ldots, X_n$ are independent and identically distributed normal random variables with expectation $\mu$ and variance $\sigma^2$. Let

$\overline{X}=(X_1+\cdots+X_n)/n$

be the "sample average", and let

$S^2=\frac{1}{n}\sum_{i=1}^n(X_i-\overline{X}\,)^2$

be a "sample variance". We also know that the variance $\sigma^2$ is defined by:

${}\sigma^2 = \frac 1N \sum_{i=1}^N \left(x_i - \overline{x} \right)^ 2 \,$

where $N$ is the population size, $x_i$ represents the $i$-th member of the population, and $\overline{x}$ is the population mean.

Then $S^2$ is a biased estimator of $\sigma^2$ because

$\operatorname{E}(S^2)=\frac{n-1}{n}\sigma^2\neq\sigma^2.$

In other words, the expected value of the sample variance does not equal the population variance $\sigma^2$, unless multiplied by the normalization factor $n/(n-1)$.

Common sense suggests applying the population formula to the sample as well. The reason the result is biased is that the sample mean is generally somewhat closer to the observations in the sample than the population mean is. This is so because the sample mean is, by definition, in the middle of the sample, while the population mean may even lie outside the sample. The deviations from the sample mean will therefore often be smaller than the deviations from the population mean, so if the same formula is applied to both, this variance estimate will on average be somewhat smaller in the sample than in the population.
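The $(n-1)/n$ deficit in the expectation of this sample-variance formula can be checked by simulation. A small Python sketch (the sample size, trial count, and standard normal population are illustrative choices):

```python
import random

def biased_sample_variance(xs):
    # S^2 with divisor n, as in the text: (1/n) * sum (x_i - xbar)^2
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n

rng = random.Random(1)
n, trials, mu, sigma = 5, 50000, 0.0, 1.0
mean_s2 = sum(biased_sample_variance([rng.gauss(mu, sigma) for _ in range(n)])
              for _ in range(trials)) / trials
# Theory: E(S^2) = (n-1)/n * sigma^2, which is 0.8 for n = 5, sigma = 1
print(mean_s2)
```

The simulated average falls close to $0.8\,\sigma^2$ rather than $\sigma^2$, in line with $\operatorname{E}(S^2)=\frac{n-1}{n}\sigma^2$.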

Note that when a transformation is applied to an unbiased estimator, the result is not necessarily itself an unbiased estimate of its corresponding population statistic. That is, for a non-linear function f and an unbiased estimator U of a parameter p, f(U) is usually not an unbiased estimator of f(p). For example, the square root of the unbiased estimator of the population variance is not an unbiased estimator of the population standard deviation.
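The standard-deviation example can likewise be seen by simulation: even with the unbiased $n-1$ divisor for the variance, the square root underestimates $\sigma$ on average. A Python sketch (sample size and trial count are illustrative):

```python
import math
import random

def unbiased_variance(xs):
    # Variance estimator with the unbiased n - 1 divisor
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

rng = random.Random(2)
n, trials, sigma = 5, 50000, 1.0
mean_sd = sum(math.sqrt(unbiased_variance([rng.gauss(0.0, sigma) for _ in range(n)]))
              for _ in range(trials)) / trials
# E(sqrt(U)) < sigma, so the average falls below the true value 1.0
print(mean_sd)
```

The downward bias reflects the concavity of the square root: the expectation of a square root is less than the square root of the expectation.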

### Estimating a Poisson probability

A far more extreme case of a biased estimator being better than any unbiased estimator is well-known: Suppose X has a Poisson distribution with expectation λ. It is desired to estimate

$\operatorname{P}(X=0)^2=e^{-2\lambda}.\quad$

(For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then $e^{-2\lambda}$ is the probability that no calls arrive in the next two minutes.)

Since the expectation of an unbiased estimator δ(X) is equal to the estimand, i.e.

$E(\delta(X))=\sum_{x=0}^\infty \delta(x) \frac{\lambda^x e^{-\lambda}}{x!}=e^{-2\lambda}$,

the only function of the data constituting an unbiased estimator is

$\delta(X)=(-1)^X.$

If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is obviously very likely to be near 0, which is the opposite extreme. And if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated obviously must be positive.

The (biased) maximum likelihood estimator

$e^{-2X}\quad$

is far better than this unbiased estimator. Not only is its value always positive, but it is also more accurate in the sense that its mean squared error (MSE)

$e^{-4\lambda}-2e^{\lambda(1/e^2-3)}+e^{\lambda(1/e^4-1)}$

is smaller; compare the unbiased estimator's MSE of

$1-e^{-4\lambda}.$

The MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is:

$e^{\lambda(1/e^2-1)}-e^{-2\lambda}$,

in accordance with the definition $\operatorname{E}(\widehat{\theta})-\theta$, since $\operatorname{E}(e^{-2X})=e^{\lambda(1/e^2-1)}$.
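The gap between the two estimators' mean squared errors can be checked by simulation. A Python sketch (the Poisson sampler uses Knuth's multiplication method; λ = 2 and the trial count are illustrative choices):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: count uniform draws until their product drops below e^-lam
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

rng = random.Random(3)
lam, trials = 2.0, 100000
target = math.exp(-2 * lam)  # the quantity being estimated, e^{-2 lambda}
mse_unbiased = mse_mle = 0.0
for _ in range(trials):
    x = poisson_sample(lam, rng)
    mse_unbiased += ((-1) ** x - target) ** 2   # unbiased estimator (-1)^X
    mse_mle += (math.exp(-2 * x) - target) ** 2  # MLE e^{-2X}
mse_unbiased /= trials
mse_mle /= trials
# At lam = 2 the unbiased estimator's MSE is nearly 1, the MLE's about 0.13
print(mse_unbiased, mse_mle)
```

The simulated values agree with the closed forms above: the unbiased estimator's MSE is near $1-e^{-4\lambda}$, while the MLE's is an order of magnitude smaller.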

### Maximum of a discrete uniform distribution

The bias of maximum-likelihood estimators can be substantial. Consider a case in which n tickets numbered from 1 to n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n + 1)/2; we can be certain only that n is at least X, and is probably more. In this case, the natural unbiased estimator is 2X − 1.
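A quick simulation illustrates the downward bias of X and the unbiasedness of 2X − 1 (the true n and the trial count below are arbitrary choices):

```python
import random

rng = random.Random(4)
n_true, trials = 20, 50000
sum_mle = sum_unbiased = 0
for _ in range(trials):
    x = rng.randint(1, n_true)  # one ticket drawn uniformly from 1..n
    sum_mle += x                # maximum-likelihood estimator: X itself
    sum_unbiased += 2 * x - 1   # unbiased estimator: 2X - 1
# E(X) = (n + 1)/2 = 10.5 here, so the MLE underestimates n on average;
# E(2X - 1) = n = 20, matching the unbiased estimator.
print(sum_mle / trials, sum_unbiased / trials)
```

The averaged MLE settles near (n + 1)/2 rather than n, while the averaged 2X − 1 settles near n itself.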