Information from outside the scaling region

It is assumed that distances $r$ on an attractor in phase space obey the cumulative distribution function (see Chapter 5)
\begin{displaymath}
P(r) = \phi r^{\nu}
\end{displaymath} (B.1)

Following Ellner [Ellner, 1988], we also assume that the correlation integral $C(r) \approx P(r)$, and that this distribution holds only for distances inside the scaling region. The scaling region is a straight part of a log-log plot of $C(r)$ vs. $r$, which is why the cumulative distribution should have the form
\begin{displaymath}
\ln P(r) = \ln \phi + \nu \ln(r)\;\;(\mbox{so that}\;\;P(r) = \phi r^{\nu})
\end{displaymath} (B.2)
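
As an aside, the straight part can be found by direct numerical inspection. The following minimal sketch (in Python; the Henon map and the fit range are merely illustrative choices, not part of the text) estimates $C(r)$ from pairwise distances and fits a rough slope on a log-log scale:

  import numpy as np
  from scipy.spatial.distance import pdist

  # Illustrative phase-space points: a Henon-map trajectory.
  x, y, pts = 0.1, 0.1, []
  for _ in range(2100):
      x, y = 1.0 - 1.4 * x**2 + y, 0.3 * x
      pts.append((x, y))
  points = np.array(pts[100:])              # discard the transient

  d = np.sort(pdist(points))                # all pairwise distances, sorted
  d /= d[-1]                                # normalise the largest distance to 1
  r_values = np.logspace(-3, 0, 50)
  C = np.searchsorted(d, r_values, side='right') / d.size   # C(r) estimate

  # The scaling region is the straight part of ln C(r) vs. ln r; the fit
  # range below is picked by eye and only yields a rough estimate of nu.
  mask = (r_values > 1e-2) & (r_values < 1e-1)
  nu_rough = np.polyfit(np.log(r_values[mask]), np.log(C[mask]), 1)[0]
  print('rough slope (estimate of nu):', nu_rough)

In practice the fit range must be chosen by eye or by a more formal criterion; that choice is exactly the issue discussed below.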

However, it seems more logical to assume that the probability density function holds only for distances inside the scaling region $]r_l,r_u]$, since the cumulative distribution depends on the density for distances in $[0,r_l]$. So suppose that distances in $[0,r_l]$ have some unknown distribution and that the probability of finding a distance in that region is $A$. Furthermore, assume that distances in $]r_l,r_u]$ are distributed according to the probability density function
\begin{displaymath}
\frac{d P(r)}{ dr } = p(r) = \phi \nu r^{\nu-1}
\end{displaymath} (B.3)

Hence, in the scaling region, the cumulative distribution function should have the form
\begin{displaymath}
P(r) = A + \int_{r_l}^{r} p(s)\, ds = A + \phi \left( r^{\nu} - r_l^{\nu} \right)
\end{displaymath} (B.4)

If we discard distances in $]r_u,1]$, we should have $P(r_u) = 1$. Therefore, after solving for $\phi$ and dividing all distances by $r_u$, we obtain
\begin{displaymath}
P(r) = A + (1-A) \cdot \frac{ r^{\nu} - r_l^{\nu} }{ 1 - r_l^{\nu} }
\end{displaymath} (B.5)
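
Explicitly, the step from eq. (B.4) to eq. (B.5) runs as follows: imposing $P(r_u) = 1$ in eq. (B.4) gives
\begin{displaymath}
\phi = \frac{ 1 - A }{ r_u^{\nu} - r_l^{\nu} }
\end{displaymath}
and dividing all distances by $r_u$ (so that $r_u = 1$) reduces this to $\phi = (1-A)/(1 - r_l^{\nu})$, which yields eq. (B.5).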

The likelihood function for this set of data, with $N_l$ distances in $[0,r_l]$ and $N_s$ distances in the rescaled scaling region $]r_l,1]$, is:
\begin{displaymath}
L(\nu) \sim A^{N_l} \cdot \prod_{i=1}^{N_s} \left[ (1-A) \cdot \frac{ \nu r_i^{\nu-1} }{ 1 - r_l^{\nu} } \right]
\end{displaymath} (B.6)

The maximum likelihood estimator obtained from this likelihood function is exactly the same as that of the ``doubly truncated'' case (see section 5.2.3). Note that the $N_l$ distances do not give any information about $\nu$.
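
To see this, take the logarithm of eq. (B.6) and differentiate with respect to $\nu$; the factor $A^{N_l} (1-A)^{N_s}$ does not involve $\nu$ and drops out, which is why the $N_l$ distances carry no information here:
\begin{displaymath}
\frac{ d \ln L(\nu) }{ d \nu } = \frac{ N_s }{ \nu } + \sum_{i=1}^{N_s} \ln r_i + \frac{ N_s \, r_l^{\nu} \ln r_l }{ 1 - r_l^{\nu} } = 0
\end{displaymath}
an implicit equation in $\nu$ alone, solved by the ``doubly truncated'' estimator (here with $r_u = 1$ after rescaling).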

However, the reason that Ellner's estimator has a smaller variance than the ``doubly truncated'' one lies in the fact that we are looking for a (straight) scaling region. We then have to assume that $A = r_l^{\nu}$; however, $A$ is unknown! A log-log plot will not show a straight line if $A \neq r_l^{\nu}$, so if we do see a sufficiently straight scaling region, the chosen $r_l$ may be larger than the smallest value for which eq. (B.3) holds, and any errors caused by a deviation of $A$ from $r_l^{\nu}$ have then died out. In that case, the $N_l$ distances do indeed contain information about $\nu$, so that it is better to use Ellner's estimator than the ``doubly truncated'' one. The alternative is to find another way of identifying the region for which eq. (B.3) holds. However, the purpose of this appendix is only to clarify the contribution of the $N_l$ distances to the estimation of the dimension.
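
To make this concrete, the following sketch (ours, not Ellner's code; the sample is synthetic, drawn from $P(r) = r^{\nu}$ with $\nu = 2.5$, so that $A = r_l^{\nu}$ holds exactly) maximizes the log-likelihood of eq. (B.6) numerically in both ways: once dropping the $\nu$-independent factors in $A$ (the ``doubly truncated'' case), and once substituting $A = r_l^{\nu}$, in which case the factors $1-A$ and $1 - r_l^{\nu}$ cancel and the $N_l$ distances enter through $r_l^{\nu N_l}$:

  import numpy as np
  from scipy.optimize import minimize_scalar

  def nu_hat_truncated(r, r_l):
      """MLE of nu from eq. (B.6) with A free: only the N_s distances
      inside ]r_l, 1] contribute; the N_l small distances drop out."""
      r_s = r[r > r_l]
      nll = lambda nu: -(r_s.size * np.log(nu)
                         + (nu - 1) * np.log(r_s).sum()
                         - r_s.size * np.log(1 - r_l**nu))
      return minimize_scalar(nll, bounds=(1e-3, 20), method='bounded').x

  def nu_hat_ellner(r, r_l):
      """MLE of nu under the extra constraint A = r_l**nu: the N_l
      small distances now contribute the factor (r_l**nu)**N_l."""
      n_l = np.count_nonzero(r <= r_l)
      r_s = r[r > r_l]
      nll = lambda nu: -(n_l * nu * np.log(r_l)
                         + r_s.size * np.log(nu)
                         + (nu - 1) * np.log(r_s).sum())
      return minimize_scalar(nll, bounds=(1e-3, 20), method='bounded').x

  # Synthetic distances with P(r) = r**2.5 on [0, 1], via inverse-CDF sampling.
  rng = np.random.default_rng(0)
  r = rng.uniform(size=10000) ** (1 / 2.5)
  print(nu_hat_truncated(r, 0.05), nu_hat_ellner(r, 0.05))

Both estimates should be close to $\nu = 2.5$ on such data; repeating the experiment over many samples should exhibit the smaller variance of the constrained (Ellner) estimator.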

