
The ``doubly censored'' case

For the entropy estimation (in section 5.3), it is necessary to estimate $\phi$ and, consequently, to use information from distances $r > r_u$. The likelihood function for the doubly censored set of data is given by (cf. [Kendall and Stuart, 1979, eq. 32.37]):
\begin{displaymath}
L(\nu,\phi) \sim {\left[ \phi r_{l}^{\nu} \right]}^{N_{l}} .
\prod_{i=1}^{N_{s}} \left[ \phi \nu r_{i}^{(\nu-1)} \right] .
{\left[ 1 - \phi r_{u}^{\nu} \right]}^{N_{u}}
\end{displaymath} (5.22)
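As a numerical illustration (not part of the original derivation), the log of eq. (5.22) can be written down directly. The sketch below is ours: the function name and the censoring bookkeeping are hypothetical, assuming $C(r)=\phi r^{\nu}$ with distances below $r_l$ and above $r_u$ only counted.

```python
import numpy as np

def log_likelihood(nu, phi, r, r_l, r_u):
    """Log of the doubly censored likelihood, eq. (5.22) (illustrative sketch).

    Distances below r_l each contribute a factor phi*r_l**nu, distances in
    [r_l, r_u] contribute the density phi*nu*r**(nu-1), and distances above
    r_u each contribute 1 - phi*r_u**nu.
    """
    r = np.asarray(r, dtype=float)
    N_l = np.sum(r < r_l)
    N_u = np.sum(r > r_u)
    inside = r[(r >= r_l) & (r <= r_u)]
    return (N_l * np.log(phi * r_l**nu)
            + np.sum(np.log(phi * nu * inside**(nu - 1.0)))
            + N_u * np.log(1.0 - phi * r_u**nu))

# Synthetic distances with C(r) = r**2 on (0, 1), i.e. nu = 2, phi = 1.
rng = np.random.default_rng(0)
r = rng.random(10000) ** 0.5
print(log_likelihood(2.0, 1.0, r, 0.1, 0.9))
```

Maximizing this function numerically should reproduce the closed-form estimators derived below.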

The likelihood equations are:
\begin{displaymath}
\frac{ \partial \ln L(\nu,\phi) } {\partial \phi} =
\frac{N_{l}+N_{s}}{\phi} - \frac{N_{u}r_{u}^{\nu}}{1-\phi r_{u}^{\nu}} = 0
\end{displaymath} (5.23)

and
\begin{displaymath}
\frac{ \partial \ln L(\nu,\phi) } {\partial \nu} =
N_{l}\ln(r_{l}) + \frac{N_{s}}{\nu} + \sum_{i=1}^{N_{s}} \ln(r_{i}) -
\frac{N_{u}\phi r_{u}^{\nu} \ln(r_{u})} {1 - \phi r_{u}^{\nu}} = 0
\end{displaymath} (5.24)

so that
\begin{displaymath}
\hat{\phi} = \frac{N_{l}+N_{s}}{Nr_{u}^{\hat{\nu}}}
\end{displaymath} (5.25)

Note that $\hat{\phi}$ depends on the reference distance. Substituting eq. (5.23) into eq. (5.24) and dividing all distances by $r_{u}$ yields:
\begin{displaymath}
\hat{\nu} = \frac{-N_{s}}
{\displaystyle{ \sum_{i=1}^{N_{s}} \ln(r_{i}) +
N_{l}\ln(r_{l})} }
\end{displaymath} (5.26)

which is very similar to Ellner's result. The difference is that here the total number of distances ($N$) is fixed, rather than $N_l + N_s$; that is why eq. (5.23) was needed as well. The asymptotic variance of a maximum likelihood estimator $t$ of a function $T$ of the $k$ parameters $\theta$ (here $\nu$ and $\phi$) is given by [Kendall and Stuart, 1979, eq. 17.87]:
\begin{displaymath}
\mbox{VAR}\left\{ t \right\} = \sum_{j=1}^{k} \sum_{l=1}^{k}
\frac{\partial T}{\partial {\theta}_{j}} .
\frac{\partial T}{\partial {\theta}_{l}} . I_{jl}^{-1}
\end{displaymath} (5.27)

where $I$ is the information matrix
\begin{displaymath}
I = \left(
\begin{array}{ll}
I_{\nu\nu} & I_{\nu\phi} \\
I_{\phi\nu} & I_{\phi\phi} \\
\end{array}\right)
\end{displaymath} (5.28)

Like eq. (5.9), eq. (5.27) takes into account only terms of order $1/N$ in the variance [Kendall and Stuart, 1979, §17.24]. Solving eq. (5.27) for $\hat{\nu}$ yields:
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = \frac{I_{\phi\phi}}{\det(I)} =
\frac{I_{\phi\phi}}{I_{\nu\nu}I_{\phi\phi} - I^{2}_{\nu\phi}}
\end{displaymath} (5.29)

The elements of the information matrix are obtained from the second partial derivatives of the likelihood function:
\begin{displaymath}
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial \nu^{2}} =
-\frac{N_{s}}{\nu^{2}} -
\frac{N_{u}\phi r_{u}^{\nu} \ln^{2}(r_{u})}{1 - \phi r_{u}^{\nu}} -
\frac{N_{u}\phi^{2} r_{u}^{2\nu} \ln^{2}(r_{u})}
{{\left( 1 - \phi r_{u}^{\nu} \right)}^{2}}
\end{displaymath} (5.30)


\begin{displaymath}
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial \phi^{2}} =
-\frac{N_{l}+N_{s}}{\phi^{2}} -
\frac{N_{u}r_{u}^{2\nu}}{{\left( 1 - \phi r_{u}^{\nu} \right)}^{2}}
\end{displaymath} (5.31)


\begin{displaymath}
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial\nu\partial\phi} =
-\frac{N_{u}r_{u}^{\nu}\ln(r_{u})}{1 - \phi r_{u}^{\nu}} -
\frac{N_{u}\phi r_{u}^{2\nu}\ln(r_{u})}
{{\left( 1 - \phi r_{u}^{\nu} \right)}^{2}}
\end{displaymath} (5.32)

Thus
\begin{displaymath}
I_{\nu\nu} = -\mbox{E}\left\{
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial \nu^{2}} \right\} =
\frac{N\phi(r_{u}^{\nu}-r_{l}^{\nu})}{\nu^{2}} +
N\phi r_{u}^{\nu}\ln^{2}(r_{u}) +
\frac{N\phi^{2} r_{u}^{2\nu}\ln^{2}(r_{u})}{1 - \phi r_{u}^{\nu}}
\end{displaymath} (5.33)


\begin{displaymath}
I_{\phi\phi} = - \mbox{E} \left\{
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial \phi^{2}} \right\} =
\frac{Nr_{u}^{\nu}}{\phi} +
\frac{Nr_{u}^{2\nu}}{1 - \phi r_{u}^{\nu}}
\end{displaymath} (5.34)


\begin{displaymath}
I_{\nu\phi} = I_{\phi\nu} = - \mbox{E} \left\{
\frac{\partial^{2} \ln L(\nu,\phi)}{\partial\nu\partial\phi} \right\} =
Nr_{u}^{\nu}\ln(r_{u}) +
\frac{N\phi r_{u}^{2\nu}\ln(r_{u})}{1 - \phi r_{u}^{\nu}}
\end{displaymath} (5.35)

since $\mbox{E}\left\{N_{s}\right\} = N\phi(r_{u}^{\nu}-r_{l}^{\nu})$, $\mbox{E}\left\{N_{l}+N_{s}\right\} = N\phi r_{u}^{\nu}$ and $\mbox{E}\left\{N_{u}\right\} = N(1-\phi r_{u}^{\nu})$. Note that
\begin{displaymath}
I_{\nu\nu} = \frac{N\phi(r_{u}^{\nu}-r_{l}^{\nu})}{\nu^{2}} +
I_{\nu\phi}\phi\ln(r_{u})
\mbox{ and }
I_{\phi\phi} = \frac{I_{\nu\phi}}{\phi\ln(r_{u})}
\end{displaymath} (5.36)

Now
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} =
\frac{1}{I_{\nu\nu} - \frac{I_{\nu\phi}^{2}}{I_{\phi\phi}}} =
\frac{1}{\displaystyle{\frac{N\phi(r_{u}^{\nu}-r_{l}^{\nu})}{\nu^{2}} +
I_{\nu\phi}\phi\ln(r_{u}) - I_{\nu\phi}\phi\ln(r_{u})}} =
\frac{\nu^{2}}{N\phi(r_{u}^{\nu}-r_{l}^{\nu})}
\end{displaymath} (5.37)
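The identities in eq. (5.36), and the resulting collapse of the variance expression, can be checked numerically. The following sketch (with arbitrary illustrative parameter values, not taken from the text) evaluates eqs. (5.33)-(5.35) and verifies both relations:

```python
import numpy as np

# Arbitrary illustrative parameter values (not taken from the text).
N, nu, phi, r_l, r_u = 1000, 2.0, 0.8, 0.1, 0.9

log_ru = np.log(r_u)
c_u = phi * r_u**nu  # C(r_u), the probability of a distance below r_u

# Information matrix elements, eqs. (5.33)-(5.35).
I_nunu = (N * phi * (r_u**nu - r_l**nu) / nu**2
          + N * phi * r_u**nu * log_ru**2
          + N * phi**2 * r_u**(2 * nu) * log_ru**2 / (1 - c_u))
I_phiphi = N * r_u**nu / phi + N * r_u**(2 * nu) / (1 - c_u)
I_nuphi = N * r_u**nu * log_ru + N * phi * r_u**(2 * nu) * log_ru / (1 - c_u)

# Eq. (5.36): both relations should hold identically.
print(np.isclose(I_nunu, N * phi * (r_u**nu - r_l**nu) / nu**2
                 + I_nuphi * phi * log_ru))
print(np.isclose(I_phiphi, I_nuphi / (phi * log_ru)))

# The determinant-based variance collapses to nu**2 / (N*phi*(r_u**nu - r_l**nu)).
var_nu = I_phiphi / (I_nunu * I_phiphi - I_nuphi**2)
print(np.isclose(var_nu, nu**2 / (N * phi * (r_u**nu - r_l**nu))))
```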

If we substitute the estimated values $\hat{\nu}$ and $\hat{\phi}$ we obtain:
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = \frac{\hat{\nu}^{2}}
{\left(N_{l}+N_{s}\right)\left(1 - r_{l}^{\hat{\nu}}\right)}
\end{displaymath} (5.38)

which is once again similar to Ellner's result. Thus, asymptotically (because $N$ is fixed instead of $ N_l + N_s $), the ``doubly censored'' estimator of the dimension is the same as Ellner's.
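The complete estimator can be sketched in a few lines. The function below is ours (name and interface hypothetical): it counts $N_l$, $N_s$ and $N_u$, then applies eqs. (5.25), (5.26) and (5.38), with $r_l$ and the observed distances expressed in units of $r_u$. It is tried on synthetic distances drawn from $C(r) = r^{\nu}$ on $(0,1)$, i.e. $\phi = 1$.

```python
import numpy as np

def doubly_censored_fit(r, r_l, r_u):
    """ML fit of C(r) = phi * r**nu from doubly censored distances (sketch).

    Distances below r_l and above r_u are only counted (N_l, N_u); the N_s
    distances in [r_l, r_u] enter with their values. Implements
    eqs. (5.25), (5.26) and (5.38).
    """
    r = np.asarray(r, dtype=float)
    N = r.size
    N_l = np.sum(r < r_l)
    N_u = np.sum(r > r_u)
    inside = r[(r >= r_l) & (r <= r_u)]
    N_s = inside.size
    # Eq. (5.26): all distances are divided by r_u, so ln(r_u) terms vanish.
    nu_hat = -N_s / (np.sum(np.log(inside / r_u)) + N_l * np.log(r_l / r_u))
    # Eq. (5.25): note that phi depends on the reference distance r_u.
    phi_hat = (N_l + N_s) / (N * r_u**nu_hat)
    # Eq. (5.38), with r_l expressed in units of r_u.
    var_nu = nu_hat**2 / ((N_l + N_s) * (1.0 - (r_l / r_u)**nu_hat))
    return nu_hat, phi_hat, var_nu

# Synthetic sample: r = U**(1/nu) gives C(r) = r**nu, here nu = 2, phi = 1.
rng = np.random.default_rng(0)
r = rng.random(20000) ** 0.5
nu_hat, phi_hat, var_nu = doubly_censored_fit(r, r_l=0.1, r_u=0.9)
print(nu_hat, phi_hat, var_nu)
```

With 20000 distances the estimate should recover $\nu \approx 2$ and $\phi \approx 1$ well within the standard deviation implied by eq. (5.38).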

