Tags: r, deSolve

Moderately large rate of change yields negative state


I'm working on a Susceptible-Infected-Recovered (SIR) model defined on a meta-population, where cells are related pairwise through a distance-based score. While running it, one of my states (S) goes below zero, and I don't understand why.

It turns out this can happen because there is apparently no constraint on the change. Is this something I'm doing wrong, or is there a solution for it?

I've tried to create a smaller, more direct example. It uses random values, but only to show that the integration is fine while the rate is relatively small; once the rate jumps, because many nearby cells become infected (here simulated by a random number), the estimation fails.

library(deSolve)
library(magrittr) # for the %>% pipe used below
ode(
  y = c(
    S1 = 10,
    I1 = 0.001),
  times = 0:5000,
  parms = list(# beta = 0.05
    beta = 0.01),
  func = function(times, state, parms) {
    with(parms, {
      # S <- state[c(1, 3)]
      # I <- state[c(2, 4)]
      S <- state[1]
      I <- state[2]

      # beta switches randomly between 0.01 and 2.01 at every function evaluation
      beta <- beta + 2 * rbinom(1, size = 1, prob = 0.4)

      list(c(dS = -beta * S * I,
             dI = beta * S * I))
    })
  },

  jacfunc = function(t, y, p) {
    with(p, {
      S <- y[1]
      I <- y[2]
      ## Jacobian of the right-hand side:
      ## row 1: d(dS)/dS = -beta * I, d(dS)/dI = -beta * S
      ## row 2: d(dI)/dS =  beta * I, d(dI)/dI =  beta * S
      matrix(
        c(-beta * I, -beta * S, beta * I, beta * S),
        nrow = 2,
        ncol = 2,
        byrow = TRUE
      )
    })
  },
  jactype = "fullusr",
  verbose = TRUE
) %>%
  # diagnostics() %>%
  print() %>%
  # zapsmall()
  identity()
  • Specifying hmin does nothing to salvage this.
  • I've also tried providing a Jacobian (as above), but that doesn't help either.

Solution

  • Diagnosis

    The code contained two issues:

    1. The model function needs to be strictly deterministic and must not contain random components, because the solvers cannot converge otherwise. In an ODE system, time is continuous by definition, so an internal random component does not make sense. Technically, the solver would draw different random values at each internal step, so the model is essentially undefined. If the intention is to provide a time-varying parameter beta, it has to be defined beforehand over well-defined time steps.
    2. The code line beta + 2 * rbinom(1, size = 1, prob = 0.4) makes beta switch between 0.01 and 2.01. For this type of model, a rate of 2.01 is unrealistically large, not merely "moderately" so. Technically the integration still runs through and gives a plausible figure, with S immediately dropping to zero. However, because the solvers have limited precision, S can reach small negative values, as the small check below illustrates.

    To get a plausible figure, one can multiply beta directly with the random component beta_r, or multiply only the random part with a value << 2.
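
    A quick way to see the precision issue is to run a deterministic version of the model with the rate fixed at 2.01 and inspect the minimum of S (a sketch; how far S undershoots zero, if at all, depends on the solver and its tolerances):

    library(deSolve)

    ## deterministic variant of the example, with the large rate fixed at 2.01
    sir <- function(time, state, parms) {
      with(as.list(c(state, parms)), {
        list(c(dS = -beta * S * I,
               dI =  beta * S * I))
      })
    }

    out <- ode(y = c(S = 10, I = 0.001), times = seq(0, 10, 0.01),
               func = sir, parms = c(beta = 2.01))

    min(out[, "S"])            ## may be a tiny negative number (numerical noise)
    zapsmall(min(out[, "S"]))  ## rounds such noise to zero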

  • Solution

    Implement the random component as a forcing

    Store the random component in an input vector (e.g. beta_r) with the same length as times. The time argument in the model function always contains the current time of the simulation and is intended for accessing time-varying inputs. Because vectors in R start with index 1, use floor(time) + 1. The input vector (often called a forcing) can be passed to func as a global variable or as an additional named argument.

    To simplify the example, I omitted the Jacobian.

    library(deSolve)
    
    func <- function(time, state, parms, beta_r) {
      with(parms, {
        S <- state[1]
        I <- state[2]
    
        ## look up the forcing for the current time; beta_r is either 0 or 1,
        ## so the effective rate switches between 0 and 2 * beta
        #beta <- beta + 2 * beta_r[floor(time) + 1]
        beta <- beta * 2 * beta_r[floor(time) + 1]
    
        list(c(dS = - beta * S * I,
               dI =   beta * S * I))
      })
    }
    
    y     <- c(S = 10, I = 0.001)
    times <- 0:500
    parms <- list(beta = 0.01)
    beta_r <- rbinom(length(times), size = 1, prob = 0.4)
    
    ## one simulation
    out <- ode(y, times, func, parms, beta_r = beta_r)
    plot(out)
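
    As a quick check, one can inspect whether S still undershoots zero in this run:

    min(out[, "S"])   ## smallest value of S; a negative value would indicate the original problem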
    

    A series of simulations

    To run a series of simulations, one can use a loop or lapply. The deSolve::plot method can plot such a list of outputs directly. A pipelined approach is also possible (a sketch follows the figure below).

    
    ## a function that runs the model with a new random number series
    run_with_new_beta <- function() {
      beta_r <- rbinom(length(times), size = 1, prob = 0.4)
      ode(y, times, func, parms, beta_r = beta_r)
    }
    
    ## run and plot 10 replicates
    out_L <- lapply(1:10, \(.) run_with_new_beta())
    plot(out, out_L)
    

    10 stochastic runs
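
    The pipelined approach mentioned above could look like this (a sketch; it assumes R >= 4.1 for the native |> pipe and reuses run_with_new_beta and out from above):

    ## run the replicates and hand the resulting list directly to plot()
    lapply(1:10, \(i) run_with_new_beta()) |>
      (\(runs) plot(out, runs))()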

    Tune solver parameters for large gradients

    As said earlier, beta = 2.0 is very large for such a system. However, if this is really intended, the time steps can be decreased, either externally via times or internally via hmax. The simulation then, of course, becomes very slow.

    func <- function(time, state, parms) {
      with(parms, {
        S <- state[1]
        I <- state[2]
    
        list(c(dS = - beta * S * I,
               dI =   beta * S * I))
      })
    }
    
    y     <- c(S = 10, I = 0.001)
    parms <- list(beta = 2)
    
    ## run the interesting short period at the beginning with a small time step
    out <- ode(y, times = seq(0, 1, length.out = 100), func, parms)
    plot(out)
    tail(out, 1)  ## state at the end of the short run
    
    ## run a longer period with a very small internal time step (hmax)
    out <- ode(y, times = seq(0, 100), func, parms, hmax = 0.001)
    summary(out)
    

    Simulation with small time step