
NetLogo Burnin / Warmup issue


Using NetLogo 5.3.1, I'm trying to set up BehaviorSpace so that all of its model runs start after exactly the same 500-tick warm-up period. However, the results are not intuitive to me.

For illustrative purposes, I will use the 'Flocking.nlogo' model from the model library. Below is the model code, with two lines added to the end of the setup procedure that run a 500-tick warm-up and then save the model's state.

turtles-own [
  flockmates         ;; agentset of nearby turtles
  nearest-neighbor   ;; closest one of our flockmates
]

to setup
  clear-all
  create-turtles population
    [ set color yellow - 2 + random 7  ;; random shades look nice
      set size 1.5  ;; easier to see
      setxy random-xcor random-ycor
      set flockmates no-turtles ]
  reset-ticks

  ; Now execute a 500-tick warm-up period and save the model's state
  repeat 500 [ go ]
  export-world "Flocking-after-500ticks.csv"
  
end

to go
  ask turtles [ flock ]
  ;; the following line is used to make the turtles
  ;; animate more smoothly.
  repeat 5 [ ask turtles [ fd 0.2 ] display ]
  ;; for greater efficiency, at the expense of smooth
  ;; animation, substitute the following line instead:
  ;;   ask turtles [ fd 1 ]
  tick
end

to flock  ;; turtle procedure
  find-flockmates
  if any? flockmates
    [ find-nearest-neighbor
      ifelse distance nearest-neighbor < minimum-separation
        [ separate ]
        [ align
          cohere ] ]
end

to find-flockmates  ;; turtle procedure
  set flockmates other turtles in-radius vision
end

to find-nearest-neighbor ;; turtle procedure
  set nearest-neighbor min-one-of flockmates [distance myself]
end

;;; SEPARATE

to separate  ;; turtle procedure
  turn-away ([heading] of nearest-neighbor) max-separate-turn
end

;;; ALIGN

to align  ;; turtle procedure
  turn-towards average-flockmate-heading max-align-turn
end

to-report average-flockmate-heading  ;; turtle procedure
  ;; We can't just average the heading variables here.
  ;; For example, the average of 1 and 359 should be 0,
  ;; not 180.  So we have to use trigonometry.
  let x-component sum [dx] of flockmates
  let y-component sum [dy] of flockmates
  ifelse x-component = 0 and y-component = 0
    [ report heading ]
    [ report atan x-component y-component ]
end

;;; COHERE

to cohere  ;; turtle procedure
  turn-towards average-heading-towards-flockmates max-cohere-turn
end

to-report average-heading-towards-flockmates  ;; turtle procedure
  ;; "towards myself" gives us the heading from the other turtle
  ;; to me, but we want the heading from me to the other turtle,
  ;; so we add 180
  let x-component mean [sin (towards myself + 180)] of flockmates
  let y-component mean [cos (towards myself + 180)] of flockmates
  ifelse x-component = 0 and y-component = 0
    [ report heading ]
    [ report atan x-component y-component ]
end

;;; HELPER PROCEDURES

to turn-towards [new-heading max-turn]  ;; turtle procedure
  turn-at-most (subtract-headings new-heading heading) max-turn
end

to turn-away [new-heading max-turn]  ;; turtle procedure
  turn-at-most (subtract-headings heading new-heading) max-turn
end

;; turn right by "turn" degrees (or left if "turn" is negative),
;; but never turn more than "max-turn" degrees
to turn-at-most [turn max-turn]  ;; turtle procedure
  ifelse abs turn > max-turn
    [ ifelse turn > 0
        [ rt max-turn ]
        [ lt max-turn ] ]
    [ rt turn ]
end


; Copyright 1998 Uri Wilensky.
; See Info tab for full copyright and license.

The BehaviorSpace window looks like this:

[screenshot of the BehaviorSpace experiment settings]

The two added lines, which save the model's state after 500 ticks, come from the answer to question 6 of Chapter 9 in Railsback & Grimm (2012), Agent-Based and Individual-Based Modeling (1st edition). The answer continues by stating the next step: "Then, in BehaviorSpace, change the "Setup commands" to just import the saved world and run 1000 more ticks".
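
Following that advice, the "Setup commands" field of my BehaviorSpace experiment contained essentially just the world import (the file name matches the export-world call in the setup above):

    import-world "Flocking-after-500ticks.csv"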

I did this, ran the experiment, and then imported the resulting table output into R to summarise the data by calculating the mean and SD of the number of flockmates at ticks 100, 200, 300, 400, and 500. The R code is below:

library(dplyr)
library(reshape2)  # for melt()

df <- read.csv("ibm_table_output-test.csv", skip = 6)

df1 <- df %>%
  rename(run_number = X.run.number.,
         time_step = X.step.,
         mean_flockmates = mean..count.flockmates..of.turtles) %>%
  select(run_number, time_step, mean_flockmates, vision) %>%
  arrange(run_number, time_step) %>%
  filter(time_step %in% c(100, 200, 300, 400, 500))

# Reshape to long format
df1_long <- melt(df1, id.vars = c("run_number", "time_step", "vision"))

# Calculate a summary table
df1.summ <- df1_long %>%
  group_by(time_step, vision) %>%
  summarise(avg = mean(value),
            sd = sd(value))

The output is as follows:

 # A tibble: 15 × 4
# Groups:   time_step [5]
   time_step vision   avg    sd
       <int>  <int> <dbl> <dbl>
 1       100      1  8.34     0
 2       100      2  8.34     0
 3       100      3  8.34     0
 4       200      1  7.83     0
 5       200      2  7.83     0
 6       200      3  7.83     0
 7       300      1  7.95     0
 8       300      2  7.95     0
 9       300      3  7.95     0
10       400      1  7.45     0
11       400      2  7.45     0
12       400      3  7.45     0
13       500      1  7.92     0
14       500      2  7.92     0
15       500      3  7.92     0

To me this output doesn't make sense.

My question is: why is the average number of flockmates the same across the different vision levels within each time_step group, and why are all the SDs 0? In other words, why do the model runs produce identical output? I thought the warm-up period would give every simulation the same starting positions, but that each run would still produce different mean and SD values because different random numbers are used after the warm-up. Or am I misunderstanding something?


EDIT: The SDs are 0 because there is no variation in the mean values, but I don't understand why there is no variation. Below is the df1_long data frame:

   run_number time_step vision        variable    value
1           1       100      1 mean_flockmates 8.340000
2           1       200      1 mean_flockmates 7.833333
3           1       300      1 mean_flockmates 7.953333
4           1       400      1 mean_flockmates 7.446667
5           1       500      1 mean_flockmates 7.920000
6           2       100      1 mean_flockmates 8.340000
7           2       200      1 mean_flockmates 7.833333
8           2       300      1 mean_flockmates 7.953333
9           2       400      1 mean_flockmates 7.446667
10          2       500      1 mean_flockmates 7.920000
11          3       100      2 mean_flockmates 8.340000
12          3       200      2 mean_flockmates 7.833333
13          3       300      2 mean_flockmates 7.953333
14          3       400      2 mean_flockmates 7.446667
15          3       500      2 mean_flockmates 7.920000
16          4       100      2 mean_flockmates 8.340000
17          4       200      2 mean_flockmates 7.833333
18          4       300      2 mean_flockmates 7.953333
19          4       400      2 mean_flockmates 7.446667
20          4       500      2 mean_flockmates 7.920000
21          5       100      3 mean_flockmates 8.340000
22          5       200      3 mean_flockmates 7.833333
23          5       300      3 mean_flockmates 7.953333
24          5       400      3 mean_flockmates 7.446667
25          5       500      3 mean_flockmates 7.920000
26          6       100      3 mean_flockmates 8.340000
27          6       200      3 mean_flockmates 7.833333
28          6       300      3 mean_flockmates 7.953333
29          6       400      3 mean_flockmates 7.446667
30          6       500      3 mean_flockmates 7.920000

Solution

  • My understanding is that you're running setup once, manually, and then running your BehaviorSpace experiment. The problem with that is that the random number generator seed is included in the export-world data you generated by running the setup procedure. When you then call import-world in the "Setup commands:" of each experiment run, that RNG seed gets imported as well. (The export actually includes the full state of the RNG, but thinking of it as being the same seed is close enough.)

    LeirsW is correct that Flocking (like most NetLogo models, and probably the model you originally had the problem with) is completely deterministic given its RNG seed, so the outcome will be the same with the same seed every time.

    The fix is easy: add a second line to your BehaviorSpace experiment's "Setup commands:", after the import-world, that runs random-seed new-seed. This makes sure each model run gets a new, unique RNG seed to use for the rest of its run.
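
    The "Setup commands:" field would then look something like this (a sketch, using the file name from your export-world call):

        import-world "Flocking-after-500ticks.csv"
        random-seed new-seed

    Because new-seed reports a fresh seed for every run, the runs will diverge after the shared 500-tick warm-up, so you should see variation between runs (and non-zero SDs) again.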