
The Ellner estimator

Now suppose that the scaling behaviour is distorted in the smallest-distance region due to noise. Ellner showed that such ``unreliable'' distances $r \leq r_{l}$ can still be used for the estimation of the dimension, by ``censoring on the left'' [Ellner, 1988]. This is called ``Type I censoring'' because the censoring takes place at a fixed point (here $r_{l}$) [Kendall and Stuart, 1979, §32.16]. The likelihood function is given by [Kendall and Stuart, 1979, cf. exer. 32.15]:
\begin{displaymath}
L(\nu) \sim {\left[ r_{l}^{\nu} \right]}^{N_{l}}
\cdot \prod_{i=1}^{N_{s}} \left[ \nu r_{i}^{\nu-1} \right]
\end{displaymath} (5.15)

where all distances have been divided by $r_{u}$. Each of the $N_{l}$ censored distances contributes the probability $F(r_{l}) = r_{l}^{\nu}$ of falling below the cutoff, while each of the $N_{s}$ remaining distances contributes the density $\nu r_{i}^{\nu-1}$. The likelihood equation is:
\begin{displaymath}
\frac{d \ln L(\nu)} {d \nu} =
N_{l}\ln(r_{l}) + \frac{N_{s}}{\nu} + \sum_{i=1}^{N_{s}} \ln(r_{i}) = 0
\end{displaymath} (5.16)

which gives the estimator of the dimension:
\begin{displaymath}
\hat{\nu} = \frac{-N_{s}}
{\displaystyle{ \sum_{i=1}^{N_{s}} \ln(r_{i}) +
N_{l}\ln(r_{l})} }
\end{displaymath} (5.17)

with variance (for fixed $N_{l} + N_{s}$)
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = \frac{\nu^{2}} {(N_{l}+N_{s})(1-r_{l}^{\nu})}
\end{displaymath} (5.18)

Eqs. (5.17) and (5.18) are the same as those derived by Ellner (see also [Deemer and Votaw, 1955] and [Kendall and Stuart, 1979, exer. 32.16]).
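As a minimal numerical sketch of Eqs. (5.15)-(5.18): the function name ellner_estimate, the use of NumPy, and the synthetic test data below are illustrative assumptions, not part of the original derivation.

import numpy as np

def ellner_estimate(r, r_l, r_u):
    """Ellner estimator of the dimension, Eq. (5.17), with Type I
    censoring on the left at r_l; all distances are divided by r_u."""
    r = np.asarray(r, dtype=float) / r_u
    rl = r_l / r_u
    N_l = int(np.sum(r <= rl))            # censored ("unreliable") distances
    kept = r[(r > rl) & (r <= 1.0)]       # the N_s distances actually used
    N_s = kept.size
    nu_hat = -N_s / (np.sum(np.log(kept)) + N_l * np.log(rl))
    # asymptotic variance, Eq. (5.18), evaluated at the estimate
    var = nu_hat ** 2 / ((N_l + N_s) * (1.0 - rl ** nu_hat))
    return nu_hat, var

# synthetic check: distances drawn from F(r) = r^nu on [0, 1]
# (i.e. r_u = 1) by inverse-CDF sampling
rng = np.random.default_rng(0)
nu_true = 2.0
r = rng.uniform(size=100_000) ** (1.0 / nu_true)
print(ellner_estimate(r, r_l=0.1, r_u=1.0))    # nu_hat should be close to 2.0

The printed estimate should be close to $\nu = 2$, with a variance given by Eq. (5.18).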

Now suppose that the scaling region extends all the way down to zero, but that we set $r_l > 0$ anyway. In that case, the variance of the Ellner estimator will be larger than that of the Takens estimator; this is caused by the loss of information about the observations in $[0,r_l]$.
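To make the loss explicit (this unnumbered display follows directly from Eq. (5.18) and is not part of the original text): setting $r_{l} = 0$, so that $N_{l} = 0$, recovers the variance $\nu^{2}/N$ of the Takens estimator, and hence the ratio of the two variances is
\begin{displaymath}
\frac{\mbox{VAR}\left\{\hat{\nu}_{\mathrm{Ellner}}\right\}}
{\mbox{VAR}\left\{\hat{\nu}_{\mathrm{Takens}}\right\}} =
\frac{1}{1-r_{l}^{\nu}} > 1
\qquad (0 < r_{l} < 1) .
\end{displaymath}
For example, with $\nu = 2$ and $r_{l} = 0.5$ the variance is inflated by a factor of $1/(1-0.25) = 4/3$.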

