I am using envelope() to create simultaneous critical envelopes. When I ran 39 simulations and repeated the procedure several times, the envelope changed enough to alter the interpretation for CSR. I would expect some variability from the Monte Carlo simulation method, but not enough to change the interpretation at the chosen critical envelope level (here 0.05). In the reproducible code below, the envelope and the Y axis change with each run, and in my example this alters the interpretation for CSR. Can someone explain this variability to me?
library(spatstat)
X <- rpoislpp(5, simplenet)                       # Poisson point pattern on a linear network
linearK(X)                                        # linear K-function estimate
plot(envelope(X, linearK, global=TRUE, nsim=39))  # simultaneous (global) critical envelopes
Short answer: try fix.n=TRUE, nsim=199, nrank=5.
Long answer:
Firstly, a Monte Carlo test is a random procedure, so repetitions of the procedure can produce different results. In your working example there are only a few points in the data point pattern, so there is a lot of variability in the simulations, and consequently high variability between different repetitions of the envelope command.
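As a quick illustrative check of how small these patterns are (assuming the pattern X from the question; volume() applied to a linear network gives its total length):

volume(simplenet)      # total length of the network
5 * volume(simplenet)  # expected number of points under rpoislpp(5, simplenet)
npoints(X)             # realised number of points, which varies around that expectation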
The variability in the results of envelope can be reduced by taking a larger number of simulations (e.g. instead of nsim=39 you could take nsim=199, nrank=5).
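For example, the call from the question might become the following (a sketch only; the exact significance level of the test is determined jointly by nsim and nrank, see the help for envelope):

library(spatstat)
X <- rpoislpp(5, simplenet)   # as in the question
# 199 simulations, with the 5th largest simulated deviation setting the
# envelope width, give much less run-to-run variability than nsim=39
E <- envelope(X, linearK, global=TRUE, nsim=199, nrank=5)
plot(E)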
Secondly, Monte Carlo tests are, strictly speaking, invalid if the null hypothesis is composite (i.e. if the null model depends on parameters that have to be estimated from the data). CSR is a composite hypothesis because it depends on the intensity parameter. The envelopes generated in the example are therefore not exactly correct, and because the variability of the simulations is high, this problem is magnified.
Fortunately, you can produce valid envelopes by setting fix.n=TRUE. This causes the simulated patterns to have the same number of points as the data pattern. This trick is only valid for CSR. (For other null models, use a two-stage Monte Carlo envelope; see bits.envelope.)
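Continuing the sketch above, the fixed-n version of the call would look like this (again only illustrative):

# Each simulated CSR pattern now has exactly npoints(X) points, which
# conditions the test on the observed count and avoids the
# composite-hypothesis problem for CSR
Efix <- envelope(X, linearK, global=TRUE, nsim=199, nrank=5, fix.n=TRUE)
plot(Efix)

For null models other than CSR, see the help for bits.envelope for the two-stage procedure mentioned above.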