
Maximum likelihood methods

For the correlation dimension, maximum likelihood estimators have been derived by Takens (1985) and Ellner (1988). In their approach the distribution function (3) is normalized such that $P(r_{u})=1$; as a consequence, the correlation entropy cannot be determined.

Now suppose we have a sample that consists of $N$ independent distances, with $N_{l}$ in the interval $[0,r_{l}]$, $N_{s}$ in $]r_{l},r_{u}]$ and $N_{u}$ in $]r_{u},1]$. The likelihood function for this doubly censored set of data is [Kendall and Stuart, 1979]

\begin{displaymath}
L(\nu,\rho) = A \cdot
{\left[ \rho {\left( \frac{r_{l}}{r_{u}} \right)}^{\nu} \right]}^{N_{l}} \cdot
\left[ \prod_{i=1}^{N_{s}} \frac{\rho \nu}{r_{u}}
{\left( \frac{r_{i}}{r_{u}} \right)}^{(\nu-1)} \right] \cdot
{\left[ 1 - \rho \right]}^{N_{u}}
\end{displaymath} (7)

where $A$ is a permutation (combinatorial) coefficient that does not depend on the parameters, and $\rho = \phi r_{u}^{\nu}$ is introduced for convenience. We now consider the case where the sample consists of $N_{d}$ distances calculated at embedding dimension $d$, and $N_{d+e}$ distances calculated at embedding dimension $d+e$. We also assume that the $N_{d}$ and $N_{d+e}$ distances are independent. The likelihood function for this case, $L(\nu,\rho_{d},\rho_{d+e})$, is the product of the likelihood functions $L_{d}(\nu,\rho_{d})$ and $L_{d+e}(\nu,\rho_{d+e})$.
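Writing out the logarithm of this combined likelihood, up to terms that do not depend on $\nu$, $\rho_{d}$ and $\rho_{d+e}$, gives

\begin{displaymath}
\ln L(\nu,\rho_{d},\rho_{d+e}) = \sum_{j \in \{d,d+e\}} \left[
\left(N_{l,j}+N_{s,j}\right) \ln \rho_{j} + N_{u,j} \ln \left(1-\rho_{j}\right) +
N_{s,j} \ln \nu + \nu \left( \sum_{i=1}^{N_{s,j}} \ln \frac{r_{i,j}}{r_{u,j}} +
N_{l,j} \ln \frac{r_{l,j}}{r_{u,j}} \right) \right]
\end{displaymath}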

By solving the likelihood equations [Kendall and Stuart, 1979], we find the maximum likelihood estimators of the parameters $\nu$, $\rho_{d}$ and $\rho_{d+e}$. These are

\begin{displaymath}
\hat{\nu} = - \frac{N_{s,d}+N_{s,d+e}} { \displaystyle{
\sum_{i=1}^{N_{s,d}} \ln \left( \frac{r_{i,d}}{r_{u,d}} \right) +
N_{l,d} \ln \left( \frac{r_{l,d}}{r_{u,d}} \right) +
\sum_{i=1}^{N_{s,d+e}} \ln \left( \frac{r_{i,d+e}}{r_{u,d+e}} \right) +
N_{l,d+e} \ln \left( \frac{r_{l,d+e}}{r_{u,d+e}} \right)
}}
\end{displaymath} (8)

and
\begin{displaymath}
\hat{\rho}_{d} = \frac{N_{l,d}+N_{s,d}} {N_{d}}
\end{displaymath} (9)

with a similar expression for $\hat{\rho}_{d+e} $. The maximum likelihood estimator of the correlation entropy is
\begin{displaymath}
\hat{K}_{2} = \frac{ \ln
\left( \frac{\hat{\rho}_{d} r_{u,d+e}^{\hat{\nu}}}
{\hat{\rho}_{d+e} r_{u,d}^{\hat{\nu}}} \right) }
{ e l \Delta t}
\end{displaymath} (10)

where we used equation (3) and the invariance property of maximum likelihood estimation: a function of maximum likelihood estimators is itself the maximum likelihood estimator of that function.
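Explicitly, the estimated prefactor of (3) at embedding dimension $j$ is $\hat{\phi}_{j} = \hat{\rho}_{j}/r_{u,j}^{\hat{\nu}}$; assuming, in line with equation (3), that the prefactors at successive embedding dimensions satisfy $\phi_{d}/\phi_{d+e} = e^{K_{2} e l \Delta t}$, equation (10) is obtained as

\begin{displaymath}
\hat{K}_{2} = \frac{1}{e l \Delta t} \ln \left( \frac{\hat{\phi}_{d}}{\hat{\phi}_{d+e}} \right)
\end{displaymath}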

The asymptotic variances of the dimension and entropy estimators are obtained by inverting the information matrix [Kendall and Stuart, 1979]. We find

\begin{displaymath}
\mbox{var}(\hat{\nu}) =
\frac{\nu^{2}}
{N_{d}\rho_{d}\left(1-{\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\nu}\right) +
N_{d+e}\rho_{d+e}\left(1-{\left(\frac{r_{l,d+e}}{r_{u,d+e}}\right)}
^{\nu}\right)}
\end{displaymath} (11)

and
\begin{displaymath}
\mbox{var} (\hat{K}_{2}) =
\frac{ \frac{1-\rho_{d}}{N_{d}\rho_{d}} +
\frac{1-\rho_{d+e}}{N_{d+e}\rho_{d+e}} +
\mbox{var}(\hat{\nu}) \ln^{2} \left(\frac{r_{u,d}}{r_{u,d+e}}\right) }
{ {(e l \Delta t)}^{2} }
\end{displaymath} (12)

If $r_{u,d}=r_{u,d+e}$, then $\hat{\nu}$ and $\hat{K}_{2}$ are uncorrelated. Moreover, the estimator of the entropy is equivalent to equation (6) if one substitutes $r=r_{u,d}=r_{u,d+e}$ and if the correlation integrals are based on independent distances.
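The uncorrelatedness can be verified directly: since the information matrix is diagonal in $(\nu,\rho_{d},\rho_{d+e})$, the asymptotic covariance of the two estimators is

\begin{displaymath}
\mbox{cov}(\hat{K}_{2},\hat{\nu}) = \frac{\mbox{var}(\hat{\nu})}{e l \Delta t}
\ln \left( \frac{r_{u,d+e}}{r_{u,d}} \right)
\end{displaymath}

which vanishes when $r_{u,d}=r_{u,d+e}$.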

The maximum likelihood estimator of the correlation dimension calculated at a single embedding dimension reads

\begin{displaymath}
\hat{\nu}_{d} = \frac{-N_{s,d}}
{\displaystyle{ \sum_{i=1}^{N_{s,d}} \ln(r_{i}) +
N_{l,d}\ln(r_{l,d})} }
\end{displaymath} (13)

Its asymptotic variance is given by
\begin{displaymath}
\mbox{var}(\hat{\nu}_{d}) = \frac{\nu_{d}^{2}}
{N_{d}\rho_{d}\left(1-{\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\nu_{d}}\right)}
\end{displaymath} (14)

These equations are slight generalizations of Ellner's results. Note that the expressions for the ``double'' correlation dimension (equation (8)) and entropy (equation (10)) are only meaningful if the ``single'' correlation dimensions $\hat{\nu}_d$ and $\hat{\nu}_{d+e}$ (equation (13)) do not significantly differ.
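A minimal numerical sketch of the single-embedding-dimension estimators (9), (13) and (14), assuming NumPy and distances expressed relative to the upper cut-off $r_{u,d}$ (consistent with the normalization in equation (8)), could read as follows; the helper name ml_dimension is purely illustrative.

\begin{verbatim}
import numpy as np

def ml_dimension(r, r_l, r_u):
    # r: all interpoint distances at one embedding dimension (0 < r <= 1)
    # Distances are first expressed relative to the upper cut-off r_u.
    r = np.asarray(r, dtype=float) / r_u
    x_l = r_l / r_u
    N = r.size
    N_l = np.sum(r <= x_l)                # censored below r_l
    s = r[(r > x_l) & (r <= 1.0)]         # uncensored distances
    N_s = s.size
    nu = -N_s / (np.sum(np.log(s)) + N_l * np.log(x_l))   # equation (13)
    rho = (N_l + N_s) / N                                  # equation (9)
    # asymptotic variance, equation (14), with the estimates substituted
    # for the true parameter values
    var_nu = nu**2 / (N * rho * (1.0 - x_l**nu))
    return nu, rho, var_nu
\end{verbatim}

The same quantities, accumulated separately for the embedding dimensions $d$ and $d+e$, are all that is needed to evaluate equations (8)-(12).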

