The dimensions, entropies, and their variances and confidence intervals are determined using the derived expressions. The sums that appear in the expressions for the dimension estimators are computed by summing the array values over the elements that belong to the scaling region. For the linear least squares estimate of the dimension, NAG routine G02CAF is used. The (log-)likelihood function for the ``doubly truncated'' case is maximized using NAG routine E04BBF (which requires the derivative of the likelihood function with respect to the dimension). The upper tail of the distribution is computed using NAG routine G01BCF.
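The least squares step can be sketched as follows. This is a minimal illustration, not the NAG G02CAF call itself: the dimension is estimated as the slope of log C(r) versus log r, fitted over the bins of the scaling region only. The function name, bin layout, and synthetic data are illustrative.

```python
import math

def lls_dimension(log_r, log_C, lo, hi):
    """Estimate the dimension as the least-squares slope of log C(r)
    versus log r, using only the bins [lo, hi) of the scaling region."""
    xs = log_r[lo:hi]
    ys = log_C[lo:hi]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic correlation integral C(r) = r**2, so the slope is exactly 2.
log_r = [math.log(r) for r in (0.01, 0.02, 0.05, 0.1, 0.2, 0.5)]
log_C = [2.0 * x for x in log_r]
print(lls_dimension(log_r, log_C, 1, 5))  # -> 2.0
```

Restricting the fit to the scaling region matters: bins outside it do not follow the power law, and including them biases the slope.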
The specified parameter values for the MLDK2 program are printed on the plot of the correlation integrals. Two of them have not been defined before: "Number of detectors" and "Seed". The number of detectors is in fact the number of observables that have been measured and stored in the data file. It is possible to use more than one observable for the reconstruction of phase space [Eckmann and Ruelle, 1985]. The value of "Seed" specifies the initial seed value for the random generator used in other versions of MLDK2. Knowledge of the seed is important when one wants to repeat an experiment in which exactly the same vector indices are chosen.
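Reconstruction from more than one observable can be sketched as follows. This is a hypothetical two-channel delay embedding; the function name and the interleaving of channels are illustrative assumptions, not necessarily the layout MLDK2 uses.

```python
def delay_vectors(channels, m, tau):
    """Build reconstruction vectors from several simultaneously measured
    observables: each vector interleaves m delayed samples of every
    channel, so its total length is m * len(channels)."""
    n = min(len(c) for c in channels)
    vectors = []
    for i in range(n - (m - 1) * tau):
        v = []
        for k in range(m):
            for c in channels:
                v.append(c[i + k * tau])
        vectors.append(v)
    return vectors

x = [0.1, 0.4, 0.9, 0.2, 0.7, 0.3]   # detector 1
y = [0.5, 0.6, 0.1, 0.8, 0.2, 0.9]   # detector 2
vecs = delay_vectors([x, y], m=2, tau=1)
print(len(vecs), len(vecs[0]))  # -> 5 4
```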
On the second page the dimension estimates (from the Ellner estimator) and the entropy estimates are plotted, together with confidence intervals, as a function of the embedding dimension. On the third page the dimension estimates from the linear least squares (D2l), Takens (D2T), ``doubly truncated'' (D2tr), Ellner (D2E, here the same as ``doubly censored'') and two embedding dimensions (D2k) estimators are tabulated, together with their standard errors and test values. Furthermore, some information is printed about the number of distances within and outside the scaling region, and the number of bins within the scaling region.
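As an illustration of one of the tabulated estimators, the Takens maximum-likelihood estimate can be sketched as follows. This is a minimal version of the standard Takens estimator for distances below an upper cutoff r0, not MLDK2's binned implementation; the variable names are illustrative.

```python
import math
import random

def takens_d2(distances, r0):
    """Takens ML estimator: D2 = -N / sum(ln(r_i / r0)), summed over
    all interpoint distances r_i smaller than the cutoff r0."""
    logs = [math.log(r / r0) for r in distances if 0 < r < r0]
    return -len(logs) / sum(logs)

# Synthetic distances drawn so that C(r) = (r / r0)**2, i.e. true D2 = 2.
rng = random.Random(1)
r0 = 0.5
sample = [r0 * rng.random() ** (1 / 2.0) for _ in range(20000)]
print(takens_d2(sample, r0))  # close to 2
```

The standard error of this estimator scales as D2 / sqrt(N), which is why the tabulated estimates come with confidence intervals.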
The class boundaries currently used for the Takens case are given by
Note: In MLDK2 all distances are divided by the square root of the embedding dimension when using the Euclidean norm. This is done so that the estimated maximum distance on the attractor (if there is one) is always less than or equal to 1.
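The effect of this normalization can be checked numerically. In this sketch the observable is assumed to be rescaled to [0, 1], so the largest possible Euclidean distance between two m-dimensional vectors is sqrt(m), and dividing by sqrt(m) bounds every distance by 1; the setup is illustrative.

```python
import math
import random

def normalized_distance(v, w):
    """Euclidean distance divided by the square root of the embedding
    dimension, as done in MLDK2 for the Euclidean norm."""
    m = len(v)
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, w)))
    return d / math.sqrt(m)

rng = random.Random(7)
m = 5
pairs = [([rng.random() for _ in range(m)], [rng.random() for _ in range(m)])
         for _ in range(1000)]
dmax = max(normalized_distance(v, w) for v, w in pairs)
print(dmax <= 1.0)  # -> True for data rescaled to [0, 1]
```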
This means that the reference distance depends on the embedding dimension. For example, the entropy estimator will be biased by a factor that depends on the embedding dimension (see equation (C.12), where we assumed that the distances at both embedding dimensions have been divided by the same reference distance).
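One way to see the bias numerically, assuming the standard scaling form C_m(l) = l**D2 * exp(-m * K2) with a unit time step (this scaling form and the symbol names are assumptions for the sketch, not equation (C.12) itself): if the entropy is estimated as ln C_m(r) - ln C_{m+1}(r) after the distances at embedding dimensions m and m+1 have been divided by sqrt(m) and sqrt(m+1) respectively, the estimate is shifted by -(D2/2) ln((m+1)/m).

```python
import math

def log_C(ell, m, D2, K2):
    # Assumed scaling form of the correlation integral (unit time step):
    # C_m(ell) = ell**D2 * exp(-m * K2); a model for the sketch, not (C.12).
    return D2 * math.log(ell) - m * K2

D2, K2, m, r = 2.0, 0.3, 4, 0.1

# The normalized distance r corresponds to a true distance r * sqrt(m)
# in embedding dimension m, and r * sqrt(m + 1) in dimension m + 1.
K_hat = (log_C(r * math.sqrt(m), m, D2, K2)
         - log_C(r * math.sqrt(m + 1), m + 1, D2, K2))

bias = K_hat - K2
print(bias, -(D2 / 2) * math.log((m + 1) / m))  # the two values agree
```

The bias vanishes as m grows, since ln((m+1)/m) tends to 0, but it is noticeable at small embedding dimensions.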