Methods of Finding Estimates

1. Method of moments. According to the method of moments, proposed by K. Pearson, a certain number of the sampling moments (initial ν̃_k, or central μ̃_k, or both) are equated to the corresponding theoretical moments (ν_k or μ_k) of the distribution of the random variable X.

Recall that the sampling moments ν̃_k and μ̃_k are determined by the formulas

ν̃_k = (1/n) Σ x_i^k,  μ̃_k = (1/n) Σ (x_i − x̄)^k,

and the corresponding theoretical moments by

ν_k = M(X^k) = Σ x_i^k φ(x_i, θ),  μ_k = M(X − a)^k = Σ (x_i − a)^k φ(x_i, θ)

for a discrete random variable with the function of probabilities p_i = φ(x_i, θ), and

ν_k = ∫ x^k φ(x, θ) dx,  μ_k = ∫ (x − a)^k φ(x, θ) dx

for a continuous random variable with density of probabilities φ(x, θ), where a = M(X).
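As a minimal illustration of these definitions, the sampling moments can be computed directly from the formulas above (the data set here is made up for the example):

```python
# Sketch: the sampling moments defined above, on made-up data.
data = [2, 3, 3, 5, 7]

def initial_moment(xs, k):
    # nu~_k = (1/n) * sum(x_i^k)
    return sum(x ** k for x in xs) / len(xs)

def central_moment(xs, k):
    # mu~_k = (1/n) * sum((x_i - x_bar)^k)
    x_bar = sum(xs) / len(xs)
    return sum((x - x_bar) ** k for x in xs) / len(xs)

print(initial_moment(data, 1))  # the sample mean x_bar = 4.0
print(central_moment(data, 2))  # the sample variance s^2 = 3.2
```

Note that the first initial sampling moment is the sample mean and the second central sampling moment is the sample variance, which is exactly what the method of moments equates to M(X) and D(X).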

Example. Find the method-of-moments estimate of the parameter λ of the Poisson distribution.

Solution: In this case, to find the single parameter λ it is enough to equate the theoretical and empirical initial moments of the first order: ν₁ = ν̃₁. The theoretical moment ν₁ is the mathematical expectation of the random variable X; for a random variable distributed by the Poisson law, M(X) = λ. The empirical moment ν̃₁ is the sample mean x̄. Consequently, the method-of-moments estimate of the parameter λ of the Poisson distribution is the sample mean: λ̃ = x̄.
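The result can be checked numerically: draw a synthetic Poisson sample and take its sample mean as the method-of-moments estimate. A sketch in Python, with a made-up true value λ = 3 and Knuth's sampling method standing in for a library generator:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for drawing one Poisson(lam) variate
    # (adequate for small lam; used here only for illustration).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam_true = 3.0  # made-up true parameter
sample = [poisson_sample(lam_true, rng) for _ in range(20000)]

# Method-of-moments estimate: lam~ = sample mean
lam_hat = sum(sample) / len(sample)
print(lam_hat)  # close to 3.0
```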

Method-of-moments estimates are usually consistent; however, in terms of efficiency they are not "best": their efficiencies are often less than 1. Nevertheless, the method of moments is frequently used in practice, since it leads to rather simple calculations.

2. Method of maximum likelihood. The basic method of obtaining estimates of the parameters of a parent population from sample data is the method of maximum likelihood, proposed by R. Fisher. The method is based on the likelihood function, which expresses the density of probability (or the probability) of the joint occurrence of the sample results x₁, x₂, …, x_n:

L(x₁, x₂, …, x_n; θ) = φ(x₁, θ)·φ(x₂, θ)·…·φ(x_n, θ)

(for a continuous random variable with density φ(x, θ)), or

L(x₁, x₂, …, x_n; θ) = p(x₁, θ)·p(x₂, θ)·…·p(x_n, θ)

(for a discrete random variable with function of probabilities p(x, θ)).

According to the method of maximum likelihood, as the estimate of the unknown parameter θ one accepts the value θ̂ that maximizes the function L. The naturalness of this approach to determining statistical estimates follows from the meaning of the likelihood function, which for each fixed value of the parameter θ is a measure of the plausibility of obtaining the observations x₁, x₂, …, x_n. The estimate θ̂ is thus such that the observations we actually have are the most plausible.

Finding the estimate θ̂ becomes simpler if we maximize not the function L itself but its logarithm ln L, since both functions attain their maximum at the same value of θ. Therefore, to find the estimate of the parameter θ (one or several), one must solve the likelihood equation (or system of equations) obtained by equating the derivative (or partial derivatives) with respect to the parameter (parameters) θ to zero:

∂ ln L / ∂θ = 0,  (*)

and then select the solution that gives the function ln L a maximum.

Example. Find the maximum-likelihood estimate of the probability p of occurrence of some event A, given the number m of occurrences of the event in n independent trials.

Solution: Compose the likelihood function:

L = C(n, m) p^m (1 − p)^(n−m)

or, omitting the binomial coefficient C(n, m), which does not depend on p,

L = p^m (1 − p)^(n−m).

Then ln L = m ln p + (n − m) ln(1 − p), and by the equation (*) we have

d ln L / dp = m/p − (n − m)/(1 − p) = 0,

whence p̂ = m/n (one can show that the sufficient condition for a maximum of the function L holds). Thus, the maximum-likelihood estimate of the probability p of the event A is the relative frequency w = m/n of the event.
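This result is easy to verify numerically: for hypothetical counts, say m = 7 successes in n = 10 trials, a grid search over p shows that ln L peaks at the relative frequency m/n. A sketch in Python:

```python
import math

# Made-up counts: m successes in n independent trials.
n, m = 10, 7

def log_likelihood(p):
    # ln L = m ln p + (n - m) ln(1 - p)
    return m * math.log(p) + (n - m) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]  # p in (0, 1)
p_hat = max(grid, key=log_likelihood)
print(p_hat)  # 0.7, i.e. the relative frequency m/n
```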

Example. Find the maximum-likelihood estimates of the parameters a and σ² of the normal law of distribution from sample data.

Solution: The density of probability of a normally distributed random variable is

φ(x; a, σ²) = (1 / (σ√(2π))) exp(−(x − a)² / (2σ²)).

Then the likelihood function has the following form:

L = (1 / (σ√(2π)))^n exp(−Σ (x_i − a)² / (2σ²)).

Taking the logarithm, we obtain:

ln L = −(n/2) ln(2π) − (n/2) ln σ² − (1 / (2σ²)) Σ (x_i − a)².

To find the parameters a and σ², one should equate the partial derivatives of ln L to zero, i.e. solve the system of likelihood equations:

∂ ln L / ∂a = (1/σ²) Σ (x_i − a) = 0,
∂ ln L / ∂σ² = −n / (2σ²) + (1 / (2σ⁴)) Σ (x_i − a)² = 0,

and consequently the maximum-likelihood estimates are equal to:

â = (1/n) Σ x_i = x̄,  σ̂² = (1/n) Σ (x_i − x̄)² = s².

Thus, the maximum-likelihood estimates of the mathematical expectation a and the variance σ² of a normally distributed random variable are the sample mean x̄ and the sample variance s².
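A quick numerical sketch in Python: for a synthetic normal sample with made-up true parameters a = 10 and σ = 2, the closed-form estimates x̄ and s² land near the true values a = 10 and σ² = 4:

```python
import random

rng = random.Random(7)
# Made-up true parameters of the normal law: a = 10, sigma = 2.
sample = [rng.gauss(10.0, 2.0) for _ in range(5000)]
n = len(sample)

# Maximum-likelihood estimates from the closed-form solution:
a_hat = sum(sample) / n                                  # x_bar
sigma2_hat = sum((x - a_hat) ** 2 for x in sample) / n   # s^2 (divisor n, not n - 1)

print(a_hat, sigma2_hat)  # near 10 and 4
```

Note the divisor n rather than n − 1: the maximum-likelihood estimate of σ² is the biased sample variance, which is only asymptotically unbiased.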

The importance of the method of maximum likelihood is connected with its optimal properties. Thus, if an efficient estimate θ* exists for the parameter θ, then the maximum-likelihood estimate is unique and equal to θ*. Moreover, under quite general conditions, maximum-likelihood estimates are consistent, asymptotically unbiased, asymptotically efficient, and asymptotically normally distributed.

The basic drawback of the method of maximum likelihood is the difficulty of computing the estimates, which involves solving the likelihood equations, most often nonlinear. It is also essential that constructing maximum-likelihood estimates and guaranteeing their "good" properties requires exact knowledge of the form of the analyzed law of distribution φ(x, θ), which in many cases turns out to be practically unattainable.

3. Method of least squares. This method is one of the simplest ways of constructing estimates. Its essence is that the estimate is determined from the condition of minimizing the sum of squared deviations of the sample data from the estimate being determined.

Example. Find the least-squares estimate of the parent mean.

Solution: According to the method of least squares, find the estimate ã from the condition of minimizing the sum:

S = Σ (x_i − ã)².

Using the necessary condition of an extremum, equate the derivative to zero:

dS/dã = −2 Σ (x_i − ã) = 0,

and consequently

ã = (1/n) Σ x_i = x̄,

i.e. the least-squares estimate of the parent mean is the sample mean x̄.
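The same conclusion can be checked numerically: minimizing S(ã) over a grid of candidate values recovers the sample mean (the data here are made up for the example). A sketch in Python:

```python
# Made-up sample data.
data = [1.0, 2.0, 4.0, 5.0]

def s(a):
    # S(a~) = sum of squared deviations of the data from a~
    return sum((x - a) ** 2 for x in data)

grid = [i / 1000 for i in range(0, 6001)]  # candidate values in [0, 6]
a_hat = min(grid, key=s)
x_bar = sum(data) / len(data)
print(a_hat, x_bar)  # both equal 3.0
```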

The method of least squares has gained wide use in the practice of statistical research since, firstly, it does not require knowledge of the law of distribution of the sample data and, secondly, it is well developed with respect to computational implementation.
