The estimator Takens derived [Takens, 1985] is based on the assumption that the scaling region extends down to the smallest distances, i.e. that the cumulative distribution of interpoint distances $r$ follows a pure power law $P(r) \propto r^{\nu}$, where $\nu$ is the dimension. Distances larger than a cutoff $r_0$ (the upper end of the scaling region) are discarded, so that we have a truncated distribution. Since the distribution has to be equal to 1 at $r = r_0$, it must be divided by $r_0^{\nu}$, giving $P(r) = (r/r_0)^{\nu}$ for $0 < r \le r_0$.
For the probability density function we obtain:

p(r) = \nu \, r^{\nu - 1} / r_0^{\nu} \qquad (5.4)
For convenience, we will often omit $r_0$ in writing
fractions of distances. We then note that ``all distances have been
divided by $r_0$''.
The likelihood function of the $N$ (normalized) distances $r_1, \ldots, r_N$ is given by (cf. [Kendall and Stuart, 1979, eq. (17.16)]):

L(\nu) = \prod_{i=1}^{N} \nu \, r_i^{\nu - 1} \qquad (5.5)
The maximum likelihood principle is to take as the estimator
of $\nu$ that value (denoted by $\hat\nu$) that maximizes
the likelihood function [Kendall and Stuart, 1979, §18.1].
The algebra simplifies if we first take logarithms:

\ln L(\nu) = N \ln \nu + (\nu - 1) \sum_{i=1}^{N} \ln r_i \qquad (5.6)
The likelihood equation is:

\frac{\partial \ln L}{\partial \nu} = \frac{N}{\nu} + \sum_{i=1}^{N} \ln r_i = 0 \qquad (5.7)
For the maximum likelihood estimator of the dimension we find:

\hat\nu = \left( -\frac{1}{N} \sum_{i=1}^{N} \ln r_i \right)^{-1} \qquad (5.8)
This estimator must be biased, since $1/\hat\nu$
is unbiased [Kendall and Stuart, 1979, §18.14]: $\mathrm{E}(-\ln r) = 1/\nu$, and by Jensen's inequality the expectation of a reciprocal exceeds the reciprocal of the expectation.
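The estimator (5.8) is straightforward to implement. The following is a minimal sketch in Python (the function name and the synthetic-data setup are illustrative, not from the original): for the density (5.4) with $r_0 = 1$, a sample can be drawn as $r = U^{1/\nu}$ with $U$ uniform on $(0,1)$, so the estimate can be checked against a known dimension.

```python
import math
import random

def takens_estimator(distances, r0):
    """Takens maximum likelihood estimate of the dimension nu,
    eq. (5.8): nu_hat = -N / sum(ln(r_i / r0))."""
    logs = [math.log(r / r0) for r in distances]
    return -len(logs) / sum(logs)

# Synthetic check: for p(r) = nu * r**(nu - 1) on (0, 1],
# inverse-transform sampling gives r = U**(1/nu), U uniform on (0, 1).
random.seed(0)
nu_true = 2.4
sample = [random.random() ** (1.0 / nu_true) for _ in range(100_000)]
nu_hat = takens_estimator(sample, r0=1.0)
```

With $N = 10^5$ distances the standard error is roughly $\nu/\sqrt{N} \approx 0.008$, so the estimate should land close to the true value.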
The asymptotic variance
(the Cramér-Rao lower bound)
of an estimator $t$ of a parameter $\theta$
is given by [Kendall and Stuart, 1979, eq. 17.23]:

\mathrm{var}(t) \ \ge\ 1 \Big/ \, \mathrm{E}\!\left[ \left( \frac{\partial \ln L}{\partial \theta} \right)^{2} \right] \qquad (5.9)

where $\mathrm{E}$ denotes expectation.
For the variance of $\hat\nu$ we find ($N$ is fixed):

\mathrm{var}(\hat\nu) \ \ge\ \nu^{2} / N \qquad (5.10)
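The bound (5.10) follows in one step from the log-likelihood (5.6):

```latex
% From (5.6): \partial \ln L / \partial \nu = N/\nu + \sum_i \ln r_i,
% so the second derivative does not depend on the data:
\frac{\partial^{2} \ln L}{\partial \nu^{2}} = -\frac{N}{\nu^{2}},
\qquad
\mathrm{E}\!\left[\left(\frac{\partial \ln L}{\partial \nu}\right)^{\!2}\right]
  = -\,\mathrm{E}\!\left[\frac{\partial^{2} \ln L}{\partial \nu^{2}}\right]
  = \frac{N}{\nu^{2}}
\quad\Rightarrow\quad
\mathrm{var}(\hat\nu) \ \ge\ \frac{\nu^{2}}{N}.
```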
Eqs. (5.8) and (5.10)
are the same as those Takens derived.
For finite $N$, the bound will not be exactly attained, because
the necessary condition [Kendall and Stuart, 1979, eq. 17.27] is not met.
(This is also the case for all other estimators described in this chapter.)
However, given the distribution (5.4), there exists an
unbiased estimator of $\nu$, which is given by [Hogg and Craig, 1978, p. 373]:

\tilde\nu = \frac{N-1}{N} \, \hat\nu = -\,(N-1) \Big/ \sum_{i=1}^{N} \ln r_i \qquad (5.11)
with variance

\mathrm{var}(\tilde\nu) = \nu^{2} / (N - 2) \qquad (5.12)
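Both (5.11)-(5.12) and the finite-$N$ results (5.13)-(5.14) below follow from the sampling distribution of $S = -\sum_{i} \ln r_i$; a sketch of this step, assuming the normalized distances are independent draws from (5.4):

```latex
% Each -\ln r_i is exponentially distributed with rate \nu, so
% S = -\sum_{i=1}^{N} \ln r_i has a Gamma(N, \nu) distribution, whence
\mathrm{E}\!\left[ S^{-1} \right] = \frac{\nu}{N-1},
\qquad
\mathrm{E}\!\left[ S^{-2} \right] = \frac{\nu^{2}}{(N-1)(N-2)} .
% With \tilde\nu = (N-1)/S this gives E(\tilde\nu) = \nu
% and var(\tilde\nu) = \nu^{2}/(N-2).
```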
Note that this variance asymptotically attains the Cramér-Rao lower
bound (eq. (5.10)).
According to [Hogg and Craig, 1978, p. 355], there is no other unbiased estimator
with smaller variance.
In practice, $\nu$ is usually unknown when evaluating these variances, and we have to
substitute the estimated value. This is legitimate
if the number of distances $N$ is large enough.
With these results, we can derive the bias and variance of
the Takens estimator for finite $N$:

\mathrm{E}(\hat\nu) = \frac{N}{N-1} \, \nu \qquad (5.13)

and

\mathrm{var}(\hat\nu) = \frac{N^{2}}{(N-1)^{2}(N-2)} \, \nu^{2} \qquad (5.14)
Thus, both the variance and the mean-square error of the Takens estimator are
larger than those of the unbiased estimator.
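The finite-$N$ bias (5.13) and the bias of the corrected estimator (5.11) can be checked numerically. The following Monte Carlo sketch (function names, seed, and sample sizes are illustrative) draws many small samples from the density (5.4) with $r_0 = 1$ and compares the sample means of both estimators with the theoretical values.

```python
import math
import random

def mle(dists):
    # Takens maximum likelihood estimator, eq. (5.8)
    return -len(dists) / sum(math.log(r) for r in dists)

def unbiased(dists):
    # Unbiased estimator, eq. (5.11): (N - 1)/N times the MLE
    return -(len(dists) - 1) / sum(math.log(r) for r in dists)

random.seed(1)
nu, N, trials = 2.0, 10, 20_000
mle_sum = unb_sum = 0.0
for _ in range(trials):
    # r = U**(1/nu) draws from p(r) = nu * r**(nu - 1) on (0, 1]
    sample = [random.random() ** (1.0 / nu) for _ in range(N)]
    mle_sum += mle(sample)
    unb_sum += unbiased(sample)

mean_mle = mle_sum / trials  # theory, eq. (5.13): N/(N-1) * nu
mean_unb = unb_sum / trials  # theory: nu itself
```

For $N = 10$ and $\nu = 2$, eq. (5.13) predicts a mean of $20/9 \approx 2.22$ for the MLE, while the corrected estimator should average close to $2$.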