
The estimation of the entropy

The correlation entropy $K_2$ can be defined by [Eckmann and Ruelle, 1985] (see Chapter 4):
\begin{displaymath}
K_2 = - \lim_{r \rightarrow 0} \lim_{d \rightarrow \infty}
\frac{1}{d \tau} \lim_{N \rightarrow \infty}
\ln C_d(r,N)
\end{displaymath} (5.39)
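
For small $r$ and large $d$, this definition reflects the familiar scaling behaviour of the correlation integral, restated heuristically here (assuming the power-law form $C(r) \sim \phi r^{\nu}$ of section 5.2.4):
\begin{displaymath}
C_{d}(r) \simeq \tilde{\phi} \, r^{\nu} \exp(-d\tau K_{2})
\end{displaymath}
so that the entropy enters only through the $d$-dependence of the prefactor.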

In section 5.2.4 we derived an estimator for
\begin{displaymath}
\phi = \tilde{\phi} \exp (-d \tau K_{2})
\end{displaymath} (5.40)

using the likelihood function for a doubly censored set of data, in which the $N_{l}$ distances below $r_{l}$ and the $N_{u}$ distances above $r_{u}$ are censored and the $N_{s}$ distances in between are observed exactly:
\begin{displaymath}
L(\nu,\phi) \sim {\left[ \phi r_{l}^{\nu} \right]}^{N_{l}} .
\prod_{i=1}^{N_{s}} \left[ \phi \nu r_{i}^{(\nu-1)} \right] .
{\left[ 1 - \phi r_{u}^{\nu} \right]}^{N_{u}}
\end{displaymath} (5.41)

To simplify the algebra (cf. appendix C), we write the likelihood function as:
\begin{displaymath}
L(\nu,\rho) \sim
{\left[ \rho {\left( \frac{r_{l}}{r_{u}} \right)}^{\nu} \right]}^{N_{l}} .
\prod_{i=1}^{N_{s}} \left[ \rho \frac{\nu}{r_{u}}
{\left( \frac{r_{i}}{r_{u}} \right)}^{(\nu-1)} \right] .
{\left[ 1 - \rho \right]}^{N_{u}} =
\rho^{N_{l}+N_{s}} . {\left[1-\rho\right]}^{N_{u}} .
{\left[ {\left( \frac{r_{l}}{r_{u}} \right)}^{\nu} \right]}^{N_{l}} .
\prod_{i=1}^{N_{s}} \left[ \frac{\nu}{r_{u}}
{\left( \frac{r_{i}}{r_{u}} \right)}^{\nu -1} \right]
\end{displaymath} (5.42)

where $\rho = \phi r_{u}^{\nu}$. The logarithm of this likelihood function is:
\begin{displaymath}
\ln L(\nu,\rho) = (N_{l}+N_{s})\ln(\rho) + N_{u}\ln(1-\rho) +
N_{l}\nu\ln\left(\frac{r_{l}}{r_{u}}\right) + N_{s}\ln(\nu) +
(\nu -1) \sum_{i=1}^{N_{s}} \ln\left(\frac{r_{i}}{r_{u}}\right) -
N_{s}\ln(r_{u})
\end{displaymath} (5.43)
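
Since the last term of eq. (5.43) does not depend on the parameters, it is irrelevant for the maximization. As a numerical illustration, eq. (5.43) can be evaluated directly; the sketch below (Python with NumPy; the function and variable names are our own, not part of the derivation) assumes the $N_{s}$ uncensored distances are stored in an array:
\begin{verbatim}
import numpy as np

def log_likelihood(nu, rho, r_s, N_l, N_u, r_l, r_u):
    # ln L(nu, rho) of eq. (5.43) for a doubly censored sample:
    # N_l distances censored below r_l, N_u censored above r_u,
    # and the uncensored distances r_s in between.
    N_s = len(r_s)
    return ((N_l + N_s) * np.log(rho) + N_u * np.log(1.0 - rho)
            + N_l * nu * np.log(r_l / r_u)
            + N_s * np.log(nu)
            + (nu - 1.0) * np.sum(np.log(r_s / r_u))
            - N_s * np.log(r_u))
\end{verbatim}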

To estimate the entropy, we have to estimate $\phi$ for at least two embedding dimensions, since $\tilde{\phi}$ is unknown. Therefore, we now consider the case where the sample consists of $N_{d}$ distances calculated at embedding dimension $d$ and $N_{d+e}$ distances calculated at embedding dimension $d+e$. We assume that the $N_{d}$ and $N_{d+e}$ distances are independent. The likelihood function for this case, $L(\nu,\rho_{d},\rho_{d+e})$, is the product of the likelihood functions $L_{d}(\nu,\rho_{d})$ and $L_{d+e}(\nu,\rho_{d+e})$. The likelihood equation for $\nu$ is:
\begin{displaymath}
\frac{\partial\ln L(\nu,\rho_{d},\rho_{d+e})}{\partial\nu} =
N_{l,d}\ln\left(\frac{r_{l,d}}{r_{u,d}}\right) + \frac{N_{s,d}}{\nu} +
\sum_{i=1}^{N_{s,d}}\ln\left(\frac{r_{i}}{r_{u,d}}\right) +
N_{l,d+e}\ln\left(\frac{r_{l,d+e}}{r_{u,d+e}}\right) + \frac{N_{s,d+e}}{\nu} +
\sum_{j=1}^{N_{s,d+e}}\ln\left(\frac{r_{j}}{r_{u,d+e}}\right) = 0
\end{displaymath} (5.44)

The likelihood equations for $\rho_d$ and $\rho_{d+e}$ are:
\begin{displaymath}
\frac{\partial\ln L(\nu,\rho_{d},\rho_{d+e})}{\partial\rho_{d}} =
\frac{N_{l,d}+N_{s,d}}{\rho_{d}} - \frac{N_{u,d}}{1-\rho_{d}} = 0
\end{displaymath} (5.45)

and
\begin{displaymath}
\frac{\partial\ln L(\nu,\rho_{d},\rho_{d+e})}{\partial\rho_{d+e}} =
\frac{N_{l,d+e}+N_{s,d+e}}{\rho_{d+e}} - \frac{N_{u,d+e}}{1-\rho_{d+e}} = 0
\end{displaymath} (5.46)

For the estimators of the parameters $\nu$, $\rho_{d}$ and $\rho_{d+e}$ we find:
\begin{displaymath}
\hat{\nu} = - \frac{N_{s,d}+N_{s,d+e}} { \displaystyle{
\sum_{i=1}^{N_{s,d}} \ln \left( \frac{r_{i}}{r_{u,d}} \right) +
\sum_{j=1}^{N_{s,d+e}} \ln \left( \frac{r_{j}}{r_{u,d+e}} \right) +
N_{l,d} \ln \left( \frac{r_{l,d}}{r_{u,d}} \right) +
N_{l,d+e} \ln \left( \frac{r_{l,d+e}}{r_{u,d+e}} \right)
}}
\end{displaymath} (5.47)

and
\begin{displaymath}
\hat{\rho}_{d} = \frac{N_{l,d}+N_{s,d}}{N_{d}}
\end{displaymath} (5.48)

and a similar expression for $\hat{\rho}_{d+e}$. The estimator of the entropy is now given by:
\begin{displaymath}
\hat{K}_{2} = \frac{1}{e\tau}
\ln \left( \frac{ \hat{\rho}_{d} \, r_{u,d+e}^{\hat{\nu}} }
{ \hat{\rho}_{d+e} \, r_{u,d}^{\hat{\nu}} } \right)
\end{displaymath} (5.49)
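
The form of eq. (5.49) follows directly from eq. (5.40) and the definition $\rho = \phi r_{u}^{\nu}$: at embedding dimensions $d$ and $d+e$ we have
\begin{displaymath}
\frac{\rho_{d}}{\rho_{d+e}} =
\frac{\phi_{d} \, r_{u,d}^{\nu}}{\phi_{d+e} \, r_{u,d+e}^{\nu}} =
\exp(e\tau K_{2}) \, {\left( \frac{r_{u,d}}{r_{u,d+e}} \right)}^{\nu}
\end{displaymath}
and solving for $K_{2}$ and inserting the estimators gives eq. (5.49).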

To obtain the variances of the dimension and entropy estimators, we have to invert the information matrix
\begin{displaymath}
I = \left(
\begin{array}{lll}
I_{\nu\nu} & I_{\nu\rho_{d}} & I_{\nu\rho_{d+e}}\\
I_{\rho_{d}\nu} & I_{\rho_{d}\rho_{d}} & I_{\rho_{d}\rho_{d+e}}\\
I_{\rho_{d+e}\nu} & I_{\rho_{d+e}\rho_{d}} & I_{\rho_{d+e}\rho_{d+e}}\\
\end{array}\right)
\end{displaymath} (5.50)

where
\begin{displaymath}
I_{\nu\nu} = -\mbox{E}\left\{ -\frac{N_{s,d}}{\nu^{2}}
-\frac{N_{s,d+e}}{\nu^{2}} \right\} =
\frac{N_{d}\rho_{d}\left(1-{\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\nu}\right) +
N_{d+e}\rho_{d+e}\left(1-{\left(\frac{r_{l,d+e}}{r_{u,d+e}}\right)}^{\nu}\right)}
{\nu^{2}}
\end{displaymath} (5.51)

and
\begin{displaymath}
I_{\rho_{d}\rho_{d}} = -\mbox{E}\left\{
-\frac{N_{l,d}+N_{s,d}}{\rho_{d}^{2}} -
\frac{N_{u,d}}{{(1-\rho_{d})}^{2}} \right\} =
\frac{N_{d}}{\rho_{d}(1-\rho_{d})}
\end{displaymath} (5.52)

and a similar expression for $I_{\rho_{d+e}\rho_{d+e}}$; here we used $\mbox{E}\{N_{s,d}\} = N_{d}\rho_{d}\left(1-{\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\nu}\right)$, $\mbox{E}\{N_{l,d}+N_{s,d}\} = N_{d}\rho_{d}$ and $\mbox{E}\{N_{u,d}\} = N_{d}(1-\rho_{d})$. All other elements of the information matrix are zero due to our choice of parameters. For the variances of $\hat{\nu}$ and $\hat{K}_{2}$ we find:
\begin{displaymath}
\mbox{VAR}\left\{\hat{\nu}\right\} = \frac{1}{I_{\nu\nu}} =
\frac{\nu^{2}}
{N_{d}\rho_{d}\left(1-{\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\nu}\right) +
N_{d+e}\rho_{d+e}\left(1-{\left(\frac{r_{l,d+e}}{r_{u,d+e}}\right)}^{\nu}\right)}
\end{displaymath} (5.53)

Using eq. (5.27) with
\begin{displaymath}
K_{2} = T(\nu,\rho_{d},\rho_{d+e}) = \frac{1}{e\tau}
\left[ \ln\left(\frac{\rho_{d}}{\rho_{d+e}}\right) + \nu\ln\left(
\frac{r_{u,d+e}}{r_{u,d}}\right) \right]
\end{displaymath} (5.54)

we obtain an expression for the variance of the ML entropy estimator:
\begin{displaymath}
\mbox{VAR}\left\{\hat{K}_{2}\right\} =
{\left( \frac{\partial T}{\partial \nu} \right)}^{2} I_{\nu\nu}^{-1} +
{\left( \frac{\partial T}{\partial \rho_{d}} \right)}^{2} I_{\rho_{d}\rho_{d}}^{-1} +
{\left( \frac{\partial T}{\partial \rho_{d+e}} \right)}^{2} I_{\rho_{d+e}\rho_{d+e}}^{-1} =
\frac{ \frac{1-\rho_{d}}{N_{d}\rho_{d}} +
\frac{1-\rho_{d+e}}{N_{d+e}\rho_{d+e}} +
\mbox{VAR}\left\{\hat{\nu}\right\}
\ln^{2}\left(\frac{r_{u,d}}{r_{u,d+e}}\right) }
{{(e\tau)}^{2}} =
\frac{ \frac{1-\phi_{d}r_{u,d}^{\nu}}{N_{d}\phi_{d}r_{u,d}^{\nu}} +
\frac{1-\phi_{d+e}r_{u,d+e}^{\nu}}{N_{d+e}\phi_{d+e}r_{u,d+e}^{\nu}} +
\mbox{VAR}\left\{\hat{\nu}\right\}
\ln^{2}\left(\frac{r_{u,d}}{r_{u,d+e}}\right) }
{{(e\tau)}^{2}}
\end{displaymath} (5.55)
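
The partial derivatives of $T$ that enter the first line of eq. (5.55) are, from eq. (5.54),
\begin{displaymath}
\frac{\partial T}{\partial \nu} =
\frac{1}{e\tau}\ln\left(\frac{r_{u,d+e}}{r_{u,d}}\right), \qquad
\frac{\partial T}{\partial \rho_{d}} = \frac{1}{e\tau\,\rho_{d}}, \qquad
\frac{\partial T}{\partial \rho_{d+e}} = -\frac{1}{e\tau\,\rho_{d+e}}
\end{displaymath}
and together with $I_{\nu\nu}^{-1} = \mbox{VAR}\left\{\hat{\nu}\right\}$ and $I_{\rho_{d}\rho_{d}}^{-1} = \rho_{d}(1-\rho_{d})/N_{d}$ (eq. (5.52)) they yield the second line.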

If we substitute the estimated values $\hat{\nu}$, $\hat{\rho}_{d}$ and $\hat{\rho}_{d+e}$ we obtain:
\begin{displaymath}
\mbox{VAR} \left\{\hat{\nu}\right\} = \frac{ {\hat{\nu}}^{2} }
{ (N_{l,d}+N_{s,d}) \left( 1 - {\left(\frac{r_{l,d}}{r_{u,d}}\right)}^{\hat{\nu}} \right) +
(N_{l,d+e}+N_{s,d+e}) \left( 1 - {\left(\frac{r_{l,d+e}}{r_{u,d+e}}\right)}^{\hat{\nu}} \right)
}
\end{displaymath} (5.56)

and
\begin{displaymath}
\mbox{VAR} \left\{\hat{K}_{2}\right\} =
\frac{ \frac{1}{N_{l,d}+N_{s,d}} - \frac{1}{N_{d}} +
\frac{1}{N_{l,d+e}+N_{s,d+e}} - \frac{1}{N_{d+e}} +
\mbox{VAR}\left\{\hat{\nu}\right\}
\ln^{2} \left( \frac{r_{u,d}}{r_{u,d+e}} \right) }
{ {(e\tau)}^{2} }
\end{displaymath} (5.57)

It is gratifying that this derivation and the one in appendix C are consistent and that both the estimators and their variances do not depend on the reference distance. They should not, because the dimension and entropy are invariant under smooth coordinate transformations [Ott et al., 1984]. In section 5.6 the validity of the expressions for the variances is supported by Monte Carlo simulations.

It can readily be verified that the covariance between $\hat{\nu}$ and $\hat{K}_{2}$ is zero if $r_{u,d}=r_{u,d+e}$. Finally, the estimator of the entropy (eq. (5.49)) is equivalent to that of the previous chapter (eq. (4.7), with $q=2$) if we set $r=r_{u,d}=r_{u,d+e}$ and if the correlation integrals are based on independent distances. Correlations between distances are discussed in section 5.6.2.

