
## The "doubly censored" case

For the entropy estimation (in section 5.3), it is necessary to estimate and, consequently, to use information from distances . The likelihood function for the doubly censored set of data is given by (cf. [Kendall and Stuart, 1979, eq. 32.37]): (5.22)
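To illustrate the general shape such a likelihood takes, consider a doubly censored power-law model (the notation below, with $M$ observed distances $r_i$ between censoring bounds $l$ and $u$, censored counts $n_l$ and $n_u$, and reference distance $r_0$, is an illustrative assumption and not necessarily that of eq. (5.22)):

$$
L \;\propto\; F(l)^{\,n_l}\,\bigl[1 - F(u)\bigr]^{\,n_u}\,\prod_{i=1}^{M} f(r_i),
\qquad
F(r) = \left(\frac{r}{r_0}\right)^{\!\nu},
\quad
f(r) = \frac{\nu\, r^{\nu-1}}{r_0^{\nu}} .
$$

Each censored distance contributes only the probability of falling in its censored region, while each observed distance contributes its density.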

The likelihood equations are: (5.23)

and (5.24)

so that (5.25)

Note that depends on the reference distance. Substituting eq. (5.23) into eq. (5.24) and dividing all distances by yields: (5.26)

which is very similar to Ellner's result. The difference is that here the total number of distances ( ) is fixed instead of . (That is why eq. (5.23) was needed as well.) The asymptotic variance of the maximum likelihood estimator of a function of the parameters (here and ) is given by [Kendall and Stuart, 1979, eq. 17.87]: (5.27)
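To make the estimation procedure concrete, here is a minimal numerical sketch of a doubly censored maximum likelihood fit. It assumes the power-law model $F(s) = s^{\nu}$ for distances scaled by the reference distance, with censoring bounds `s_l` and `s_u`; these names and the simulated data are illustrative assumptions, not the document's actual notation or data.

```python
import math
import random

def loglik(nu, n_l, n_u, s_l, s_u, m, sum_log_s):
    """Doubly censored log-likelihood for scaled distances s = r / r0,
    assuming the power-law model F(s) = s**nu on (0, 1]."""
    if nu <= 0.0:
        return float("-inf")
    return (n_l * nu * math.log(s_l)          # n_l distances censored below s_l
            + n_u * math.log(1.0 - s_u**nu)   # n_u distances censored above s_u
            + m * math.log(nu)                # m observed distances contribute
            + (nu - 1.0) * sum_log_s)         #   the density f(s) = nu * s**(nu - 1)

def estimate_nu(scaled, s_l, s_u):
    n_l = sum(1 for s in scaled if s < s_l)
    n_u = sum(1 for s in scaled if s > s_u)
    logs = [math.log(s) for s in scaled if s_l <= s <= s_u]
    m, sum_log_s = len(logs), sum(logs)
    lo, hi = 1e-3, 20.0                       # ternary search; loglik is unimodal in nu
    for _ in range(120):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if loglik(m1, n_l, n_u, s_l, s_u, m, sum_log_s) < \
           loglik(m2, n_l, n_u, s_l, s_u, m, sum_log_s):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

random.seed(1)
nu_true = 2.0
# inverse-transform sampling: if U ~ Uniform(0, 1), then U**(1/nu) has F(s) = s**nu
sample = [random.random() ** (1.0 / nu_true) for _ in range(20000)]
nu_hat = estimate_nu(sample, s_l=0.05, s_u=0.8)
```

With 20000 simulated distances the estimate lands close to the true exponent; the censored counts enter only through the two censoring terms of the log-likelihood, which is what distinguishes this fit from a fully observed one.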

where is the information matrix (5.28)
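In generic notation (a standard form of the cited result, with a parameter vector $\theta$ standing in for the document's specific parameters), the asymptotic variance of a function $g(\hat{\theta})$ of maximum likelihood estimates is

$$
\operatorname{var}\bigl[g(\hat{\theta})\bigr] \;\simeq\;
\left(\frac{\partial g}{\partial \theta}\right)^{\!T}
\mathbf{I}^{-1}
\left(\frac{\partial g}{\partial \theta}\right),
\qquad
I_{jk} \;=\; -\,E\!\left[\frac{\partial^{2}\ln L}{\partial \theta_{j}\,\partial \theta_{k}}\right].
$$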

Like eq. (5.9), eq. (5.27) accounts only for the leading-order terms in the variance [Kendall and Stuart, 1979, §17.24]. Solving eq. (5.27) for yields: (5.29)

The elements of the information matrix are obtained from the second partial derivatives of the likelihood function: (5.30) (5.31) (5.32)

Thus (5.33) (5.34) (5.35)

since , and . Note that (5.36)
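As a numerical cross-check of this kind of information-matrix calculation (a sketch under an assumed simple power-law model, not the document's exact eqs. (5.30)-(5.35)), the information can be approximated by a finite-difference second derivative of the log-likelihood and compared with its analytic value:

```python
import math
import random

def loglik(nu, log_s):
    # log-likelihood of the simple power-law density f(s) = nu * s**(nu - 1), 0 < s <= 1
    return len(log_s) * math.log(nu) + (nu - 1.0) * sum(log_s)

random.seed(7)
nu_true = 2.0
log_s = [math.log(random.random() ** (1.0 / nu_true)) for _ in range(5000)]

M = len(log_s)
nu_hat = -M / sum(log_s)        # closed-form ML estimate for the uncensored case

# observed information: minus the second derivative of the log-likelihood at nu_hat,
# approximated here by a central finite difference
h = 1e-4
info = -(loglik(nu_hat + h, log_s)
         - 2.0 * loglik(nu_hat, log_s)
         + loglik(nu_hat - h, log_s)) / h**2

var_nu = 1.0 / info             # one-parameter analogue of inverting the information matrix
# analytically the information is M / nu**2 for this model, so var_nu ~ nu_hat**2 / M
```

For this one-parameter model the finite-difference information agrees closely with the analytic value $M/\nu^2$, which is the kind of consistency one would check against eqs. (5.30)-(5.35) in the two-parameter case.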

Now: (5.37)

If we substitute the estimated values and , we obtain: (5.38)

which is once again similar to Ellner's result. Thus, asymptotically (because is fixed instead of ), the "doubly censored" estimator of the dimension is the same as Ellner's.