Next: The equalprobabilities method of
Up: MAXIMUM LIKELIHOOD METHODS
Previous: Generalizations
Contents
Throughout the previous sections we assumed that the finite number
of distances is the only source of error.
In practice, the distribution of distances within the (identified)
scaling region will usually not be given exactly by eq. (5.1).
We therefore have to test whether this distribution fits the
data or not. Other sources of error, e.g. lacunarity (a non-constant
prefactor in the scaling law), may
distort the scaling behaviour, and the estimators and their variances
derived in the previous sections may then no longer be valid.
Furthermore, this test is particularly interesting if we lack prior
knowledge about the attractor, in particular its dimension.
Therefore, we will first estimate the parameters of the distribution
and then test whether the chosen scaling region deserves its name.
Note, however, that passing such a test only means that it
is reasonable to say that the scaling region is straight, not
that the slope is the true dimension.
Let r_1, ..., r_N be independent
observations of a random variable (distances) with distribution function P(r),
which has a known form but some unknown parameters.
We now wish to test
the hypothesis

    H_0 : the distances are indeed distributed according to P(r).    (5.58)
Any test of (5.58) is called a test of fit [Kendall and Stuart, 1979, §30.2].
The hypothesis is called simple
if the distribution is completely specified. In our
case, we have to estimate the parameters of the distribution, so
that our hypothesis is composite [Kendall and Stuart, 1979, §23.2].
The probability that we reject the hypothesis when it is true
is called the size of the test [Kendall and Stuart, 1979, §22.6].
A ``chi-square test'' can be devised to determine a value Q
such that a test of any size smaller than Q would
not reject H_0. In other words: we calculate Q
(as discussed below) and we reject the hypothesis
if Q is smaller than the chosen size.
We will use the conventional value of 0.05.
Now suppose that the range of the variate is divided into k
mutually exclusive classes. The probability of an observation
falling in class i is denoted by p_i and the observed
frequency by n_i, with sum_{i=1}^{k} n_i = N.
If we assume that the n_i are multinomially distributed, then

    X^2 = sum_{i=1}^{k} (n_i - N p_i)^2 / (N p_i)    (5.59)

has an (approximate [Hogg and Craig, 1978, p.271]) chi-square
distribution with k - 1 degrees
of freedom [Kendall and Stuart, 1979, §30.5].
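As an illustration, the statistic of eq. (5.59) can be computed directly from the class counts. The sketch below is ours (the names `pearson_chi2`, `counts` and `probs` do not come from the text):

```python
def pearson_chi2(counts, probs):
    """Pearson chi-square statistic (eq. 5.59): observed class counts n_i
    against multinomial class probabilities p_i, with N = sum of counts."""
    N = sum(counts)  # total number of observations
    return sum((n - N * p) ** 2 / (N * p)
               for n, p in zip(counts, probs))

# Example: 30 distances in 3 equiprobable classes, expected 10 per class
stat = pearson_chi2([12, 8, 10], [1/3, 1/3, 1/3])  # -> 0.8 (up to rounding)
```

With k = 3 classes and no estimated parameters this value would be compared against a chi-square distribution with k - 1 = 2 degrees of freedom.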
However, we estimate the p_i using maximum likelihood estimators,
so that the distribution of X^2 is bounded between a chi-square
variable with k - 1 degrees of freedom and one with
k - s - 1 degrees of freedom, where s is the
number of estimated parameters [Kendall and Stuart, 1979, §30.19].
For k large enough, this poses no serious problem.
The value of Q we choose to use is now given by the probability
associated with the upper tail of the chi-square distribution with
k - s - 1 degrees of freedom [Kendall and Stuart, 1979, §30.6]. This will be
smaller than the value we would obtain with k - 1 degrees of freedom,
so we will sooner reject H_0.
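The upper-tail probability Q can be evaluated without special libraries. The sketch below (function name ours) computes the chi-square survival function for integer degrees of freedom using the standard recurrence for the regularized upper incomplete gamma function:

```python
import math

def chi2_upper_tail(x, dof):
    """Probability that a chi-square variable with `dof` degrees of
    freedom exceeds x, i.e. Q = 1 - P(chi-square <= x)."""
    z = x / 2.0
    if dof % 2 == 0:
        a, q = 1.0, math.exp(-z)             # closed form for dof = 2
    else:
        a, q = 0.5, math.erfc(math.sqrt(z))  # closed form for dof = 1
    # Raise the dof two at a time: Q(a+1, z) = Q(a, z) + z^a e^{-z} / Gamma(a+1)
    while 2 * a < dof:
        q += z ** a * math.exp(-z) / math.gamma(a + 1.0)
        a += 1.0
    return q

# Decision rule: reject H0 at the conventional size 0.05 if Q < 0.05.
Q = chi2_upper_tail(16.9, 9)  # statistic 16.9 with k - s - 1 = 9 dof
reject = Q < 0.05
```

A statistic of 16.9 with 9 degrees of freedom sits almost exactly at the 5% point, illustrating how close a borderline case can be to the rejection threshold.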