
The ``doubly truncated'' case

Intuitively, one might think that distances $r < r_l$ should be discarded just like distances $r > r_{u}$ (however, see the discussion in appendix B). In that case we have a ``doubly truncated'' data set and the distribution is normalized by the factor $\phi(r_{u}^{\nu} - r_{l}^{\nu})$. For the logarithm of the likelihood function we find (cf. eq. (5.6)):
\begin{displaymath}
\ln L(\nu) = N_{s}\ln(\nu) + (\nu-1) \sum_{i=1}^{N_{s}} \ln(r_{i}) -
N_{s} \ln(1 - r_{l}^{\nu})
\end{displaymath} (5.19)

where all distances have been divided by $r_{u}$. Solving the likelihood equation yields:
\begin{displaymath}
\hat{\nu} = \frac{-N_{s}}
{\displaystyle{ \sum_{i=1}^{N_{s}} \ln(r_{i}) +
\frac{N_{s}r_{l}^{\nu}\ln(r_{l})} {1-r_{l}^{\nu}} } }
\end{displaymath} (5.20)
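A short intermediate step may be helpful here: differentiating (5.19) with respect to $\nu$ gives the likelihood equation
\begin{displaymath}
\frac{\partial \ln L(\nu)}{\partial \nu} = \frac{N_{s}}{\nu} +
\sum_{i=1}^{N_{s}} \ln(r_{i}) +
\frac{N_{s}\, r_{l}^{\nu} \ln(r_{l})}{1 - r_{l}^{\nu}} = 0 ,
\end{displaymath}
and solving the $N_{s}/\nu$ term for $\nu$ yields eq. (5.20).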

The right hand side of eq. (5.20) still depends on $\nu$. Note that $r_{l}^{\nu}$ can be estimated by $\frac{N_{l}}{N_{l}+N_{s}}$ (binomial distribution). If this estimate is substituted, the equation becomes equivalent to Ellner's. The substitution is not justified from a statistical point of view, however: it uses information from the distances $r \leq r_{l}$ that were supposed to be discarded. The dimension can be estimated by numerical maximization of the log-likelihood function, or by iteration on eq. (5.20) [Kendall and Stuart, 1979, §32.17A]. With $N_s$ fixed, the variance is given by:
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = {\left[ N_{s} \left( \frac{1}{\nu^{2}} -
\frac{r_{l}^{\nu}\, {\ln}^{2}(r_{l})} {{\left( 1 - r_{l}^{\nu} \right)}^{2} }
\right) \right]}^{-1}
\end{displaymath} (5.21)

which is larger than (5.18), except in the limit $r_l \rightarrow 0$, where the Takens, Ellner and ``doubly truncated'' estimators and their variances all coincide.
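The iteration on eq. (5.20) and the variance formula (5.21) can be sketched as follows. This is a minimal illustration, not part of the original text: the function names, the Takens-like starting value and the convergence tolerance are our own choices.

```python
import numpy as np

def nu_hat_doubly_truncated(r, r_l, r_u, tol=1e-10, max_iter=200):
    """ML estimate of the dimension nu by iterating on eq. (5.20).

    r : distances with r_l < r < r_u.
    All distances (and r_l) are divided by r_u first, as in the text.
    """
    r = np.asarray(r, dtype=float) / r_u
    rl = r_l / r_u
    N_s = r.size
    sum_log = np.sum(np.log(r))
    nu = -N_s / sum_log  # r_l -> 0 limit as a starting value
    for _ in range(max_iter):
        # right-hand side of eq. (5.20), evaluated at the current nu
        nu_new = -N_s / (sum_log + N_s * rl**nu * np.log(rl) / (1.0 - rl**nu))
        if abs(nu_new - nu) < tol:
            break
        nu = nu_new
    return nu

def var_nu_hat(nu, N_s, r_l):
    """Variance of the estimator, eq. (5.21); r_l already divided by r_u."""
    t = r_l**nu
    fisher = N_s * (1.0 / nu**2 - t * np.log(r_l)**2 / (1.0 - t)**2)
    return 1.0 / fisher
```

As a check, sampling $r$ from the doubly truncated distribution with a known $\nu$ (by inverse-transform sampling of $P(R \leq r) = (r^{\nu} - r_{l}^{\nu})/(1 - r_{l}^{\nu})$ after scaling by $r_u$) recovers $\nu$ to within a few standard deviations of (5.21), and the variance indeed grows as $r_l$ increases.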

