
The Takens estimator

The estimator that Takens derived [Takens, 1985] is based on the assumption that the scaling region extends down to the smallest distances ($r_{l} = 0$ and $N_{l} = 0$). Distances $r > r_{u}$ are discarded, so that we have a truncated distribution. Since the cumulative distribution has to equal 1 at $r_u$, it must be divided by $\tilde{\phi} \exp (-d \tau K_{2}) r_u^{\nu}$. For the probability density function we obtain:
\begin{displaymath}
p(r) = \nu {\left(\frac{r}{r_u}\right)}^{\nu -1} \cdot
\frac{1}{r_u}
\end{displaymath} (5.4)
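
Distances obeying the density (5.4) are easy to generate, which is convenient for testing the estimators discussed below. The following sketch (not part of the original derivation; it assumes NumPy, and the function name draw_distances is ours) uses inverse-transform sampling: since the cumulative distribution is ${(r/r_u)}^{\nu}$, a uniform variate $u$ yields $r = r_u\, u^{1/\nu}$.

\begin{verbatim}
import numpy as np

def draw_distances(nu, r_u, n_s, rng=None):
    # Draw n_s distances from the truncated power-law density (5.4),
    #   p(r) = nu * (r / r_u)**(nu - 1) / r_u   on (0, r_u].
    # The cumulative distribution is (r / r_u)**nu, so inverse-transform
    # sampling of a uniform variate u gives r = r_u * u**(1 / nu).
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n_s)
    return r_u * u ** (1.0 / nu)
\end{verbatim}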

For convenience, we will often omit $r_u$ in writing fractions of distances. We then note that ``all distances have been divided by $r_u$''. The likelihood function is given by (cf. [Kendall and Stuart, 1979, eq. (17.16)]):
\begin{displaymath}
L(\nu) \sim \prod_{i=1}^{N_{s}} \left[ \nu r_i^{(\nu-1)} \right]
\end{displaymath} (5.5)

The maximum likelihood principle is to take as the estimator of $\nu$ that value (denoted by $\hat{\nu}$) that maximizes the likelihood function [Kendall and Stuart, 1979, §18.1]. The algebra simplifies if we first take logarithms:
\begin{displaymath}
\ln L(\nu) \sim N_{s}\ln(\nu) + (\nu-1) \sum_{i=1}^{N_{s}} \ln(r_i)
\end{displaymath} (5.6)

The likelihood equation is:
\begin{displaymath}
\frac{d \ln L(\nu)} {d \nu} = \frac{N_{s}}{\nu} +
\sum_{i=1}^{N_{s}} \ln(r_{i}) = 0
\end{displaymath} (5.7)

For the maximum likelihood estimator of the dimension we find:
\begin{displaymath}
\hat{\nu} = \frac{-N_{s}}
{\displaystyle{\sum_{i=1}^{N_{s}} \ln(r_{i})}}
\end{displaymath} (5.8)
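
As an illustration, eq. (5.8) can be evaluated directly once the distances below the cut-off $r_u$ have been collected. The following sketch assumes NumPy; the name takens_estimator is ours, not part of the original text.

\begin{verbatim}
import numpy as np

def takens_estimator(r, r_u):
    # Maximum likelihood (Takens) estimate of nu, eq. (5.8):
    #   nu_hat = -N_s / sum(ln(r_i / r_u))
    # Only distances 0 < r_i < r_u are used (truncated distribution),
    # and they are divided by r_u as in the text.
    r = np.asarray(r, dtype=float)
    r = r[(r > 0.0) & (r < r_u)]
    return -len(r) / np.sum(np.log(r / r_u))
\end{verbatim}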

This estimator must be biased since $\widehat{(1/\nu)}$ is unbiased [Kendall and Stuart, 1979, §18.14]: the estimator $-\frac{1}{N_s}\sum_i \ln(r_i)$ of $1/\nu$ is unbiased, and taking its reciprocal does not preserve unbiasedness. The asymptotic variance (the Cramér-Rao lower bound) of an estimator $t$ of a parameter $\theta$ is given by [Kendall and Stuart, 1979, eq. 17.23]:
\begin{displaymath}
\mbox{VAR}\left\{t\right\} = -1 /
E \left( \frac {d^{2} \ln L(\theta)} {d \theta^{2}} \right)
\end{displaymath} (5.9)

where $E$ denotes expectation. For the variance of $\hat{\nu}$ we find ($N_{s}$ is fixed):
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = \frac{\nu^{2}}{N_{s}}
\end{displaymath} (5.10)

Eqs. (5.8) and (5.10) are the same as those derived by Takens. For finite $N_{s}$, the bound will not be exactly attained, because the necessary condition [Kendall and Stuart, 1979, eq. 17.27] is not met. (This is also the case for all other estimators described in this chapter.)

However, given the distribution (5.4), there exists an unbiased estimator of $\nu$, which is given by [Hogg and Craig, 1978, p.373]:

\begin{displaymath}
\hat{\nu}_{\mbox{u}} = - \frac{N_{s}-1}
{\displaystyle{\sum_{i=1}^{N_{s}} \ln(r_{i})}}
\end{displaymath} (5.11)

with variance
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}_{\mbox{u}}\right\} = \frac{\nu^{2}}{N_{s}-2}
\end{displaymath} (5.12)

Note that the variance asymptotically attains the Cramér-Rao lower bound (eq. (5.10)). According to [Hogg and Craig, 1978, p.355], there is no other unbiased estimator with smaller variance. In practice, $\nu$ is usually unknown and we have to substitute the estimated value in the variance expression. This is legitimate if the number of distances is large enough.
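
For completeness, here is the analogous sketch for the unbiased estimator of eq. (5.11), returning the variance of eq. (5.12) with $\nu$ replaced by its estimate, as discussed above. Again NumPy is assumed and the function name is ours.

\begin{verbatim}
import numpy as np

def takens_estimator_unbiased(r, r_u):
    # Unbiased estimate of nu, eq. (5.11):
    #   nu_hat_u = -(N_s - 1) / sum(ln(r_i / r_u))
    # together with its variance, eq. (5.12), nu**2 / (N_s - 2),
    # evaluated with nu replaced by nu_hat_u.
    r = np.asarray(r, dtype=float)
    r = r[(r > 0.0) & (r < r_u)]
    n_s = len(r)
    nu_hat_u = -(n_s - 1) / np.sum(np.log(r / r_u))
    return nu_hat_u, nu_hat_u ** 2 / (n_s - 2)
\end{verbatim}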

With these results, we can derive the bias and variance of the Takens estimator for finite $N_{s}$:

\begin{displaymath}
\mbox{BIAS}\left\{\hat{\nu}\right\} =
\mbox{E}\left\{\hat{\nu}\right\} - \nu =
\mbox{E}\left\{\frac{N_{s}}{N_{s}-1}\,\hat{\nu}_{\mbox{u}}\right\} - \nu =
\nu\left(\frac{N_{s}}{N_{s}-1}-1\right) = \frac{\nu}{N_{s}-1}
\end{displaymath} (5.13)

and
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} =
\mbox{VAR}\left\{\frac{N_{s}}{N_{s}-1}\,\hat{\nu}_{\mbox{u}}\right\} =
{\left(\frac{N_{s}}{N_{s}-1}\right)}^{2}
\frac{\nu^{2}}{N_{s}-2}
\end{displaymath} (5.14)

Thus, both the variance and the mean-square error of the Takens estimator are larger than those of the unbiased estimator.
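
Eqs. (5.12)-(5.14) can be checked numerically by drawing many synthetic samples from the density (5.4) and comparing the sample moments of both estimators with the theoretical values. The following Monte Carlo sketch does this; the parameter values are illustrative only and NumPy is assumed.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nu, r_u, n_s, n_trials = 2.0, 1.0, 50, 20000

est, est_u = [], []
for _ in range(n_trials):
    r = r_u * rng.uniform(size=n_s) ** (1.0 / nu)  # sample from (5.4)
    s = np.sum(np.log(r / r_u))
    est.append(-n_s / s)                           # eq. (5.8)
    est_u.append(-(n_s - 1) / s)                   # eq. (5.11)

print("bias of nu_hat :", np.mean(est) - nu,
      "  theory:", nu / (n_s - 1))                                # (5.13)
print("var of nu_hat  :", np.var(est),
      "  theory:", (n_s / (n_s - 1)) ** 2 * nu ** 2 / (n_s - 2))  # (5.14)
print("var of nu_hat_u:", np.var(est_u),
      "  theory:", nu ** 2 / (n_s - 2))                           # (5.12)
\end{verbatim}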

