Tags: c++, cpu-architecture, cpu-cache, locality-of-reference

All occurrences of temporal and spatial locality of reference in a given code snippet


I have read about spatial and temporal locality.

In a few words:

Temporal locality: Programs often access the same memory locations repeatedly.

Spatial locality: Programs also often access adjacent memory locations repeatedly.

Now I'm trying to analyse the following code to find all occurrences of temporal and spatial locality of reference.

for (int i = 0, j = 10; i < 100; i++)
    a[i] = j++;

I figured out only the following.

Spatial

  • a[i] = j++; after referencing a[i], we are about to reference a[i+1].
  • The instructions to do all this are stored next to each other in memory.

Temporal

  • i compared against 100.
  • i incremented by one: i++.
  • The assignment a[i] = j++ uses i as the array index.
  • Array base a used for indexing each a[i].
  • j incremented by one in the assignment a[i] = j++.

So am I correct about all of these, and did I miss anything else?


Solution

  • First, the definitions:

    Spatial locality - locality in space, accessing nearby memory locations.

    Temporal locality - locality in time, accessing the same location several times.

    Caches are designed to exploit spatial locality: when your program needs a[i], the cache doesn't fetch just a[i] from memory. It fetches several entries, a[i], a[i+1], a[i+2], ..., up to the size of a cache line. By doing so, the cache bets that you are likely to access a[i+1] soon. And when you do, it does not need to be fetched from memory; it is already there, thanks to spatial locality.
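    To see the effect, here is a small self-contained C++ sketch (my own illustration, not part of the original answer). It assumes 64-byte cache lines, which is typical but not guaranteed. Both calls below touch every element exactly once, but the strided walk lands on a new cache line at almost every access, while the sequential walk reuses each fetched line for the neighbouring elements.

        // Build with optimizations, e.g. g++ -O2 spatial.cpp
        #include <chrono>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        int main() {
            constexpr std::size_t kCount  = std::size_t{1} << 24;  // 16M ints = 64 MiB, larger than cache
            constexpr std::size_t kStride = 64 / sizeof(int);      // one (assumed 64-byte) cache line apart

            std::vector<int> a(kCount, 1);

            auto time_sum = [&](std::size_t stride) {
                long long sum = 0;
                auto start = std::chrono::steady_clock::now();
                for (std::size_t s = 0; s < stride; ++s)           // both calls visit every element exactly once
                    for (std::size_t i = s; i < kCount; i += stride)
                        sum += a[i];
                auto stop = std::chrono::steady_clock::now();
                std::cout << "stride " << stride << ": "
                          << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
                          << " ms (sum = " << sum << ")\n";
            };

            time_sum(1);        // sequential walk: each fetched line also serves the following elements
            time_sum(kStride);  // cache-line-sized stride: nearly every access needs a new line from memory
        }

    On typical hardware the strided run is noticeably slower, even though both runs do the same amount of arithmetic; the difference is purely the loss of spatial locality.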

    Temporal locality comes in when you access i and j. Every time you need i or j, there is no need to fetch it from memory, because it is cached. By caching i and j, the cache assumes you are going to access them again, saving the time of fetching them from memory.
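    As a companion sketch (again my own, with buffer sizes chosen as assumptions), the two runs below perform the same total number of additions. The small buffer is swept many times, so after the first pass its data is served from cache; the large buffer is swept only once, so there is no reuse to exploit.

        // Build with optimizations, e.g. g++ -O2 temporal.cpp
        #include <chrono>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        static long long sweep(const std::vector<int>& buf, std::size_t passes) {
            long long sum = 0;
            for (std::size_t p = 0; p < passes; ++p)
                for (int v : buf)          // the same elements are re-read on every pass
                    sum += v;
            return sum;
        }

        int main() {
            constexpr std::size_t kTotal = std::size_t{1} << 24;   // total element reads in each run
            std::vector<int> small(1 << 14, 1);                    // 64 KiB: fits comfortably in cache
            std::vector<int> large(kTotal, 1);                     // 64 MiB: far larger than cache

            auto start = std::chrono::steady_clock::now();
            long long s1 = sweep(small, kTotal / small.size());    // many passes over a hot buffer
            auto mid   = std::chrono::steady_clock::now();
            long long s2 = sweep(large, 1);                        // one pass, no reuse
            auto stop  = std::chrono::steady_clock::now();

            using ms = std::chrono::milliseconds;
            std::cout << "small buffer, many passes: "
                      << std::chrono::duration_cast<ms>(mid - start).count() << " ms (sum = " << s1 << ")\n"
                      << "large buffer, one pass:    "
                      << std::chrono::duration_cast<ms>(stop - mid).count()  << " ms (sum = " << s2 << ")\n";
        }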

    And to answer the exact point: yes, you have identified the places with spatial and temporal locality. The only thing that is not correct is your statement, "The instructions to do all this are stored next to each other in memory". There won't be separate instructions to access a[i], a[i+1], and so on; it will be a single instruction used inside the loop, with the memory address calculated each time (see the sketch below). As explained above, the advantage is that the cache does not need to fetch the data for the next iteration, because it was likely already fetched as part of an earlier iteration of the loop.
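    To make that concrete, here is a rough hand-written paraphrase of the point (not actual compiler output): the loop body contains one store whose target address is recomputed from the array base and i on each iteration, rather than a separate instruction per element.

        void fill(int *a) {                // assumes a points to at least 100 ints
            int j = 10;
            for (int i = 0; i < 100; ++i) {
                *(a + i) = j;              // the same store, executed 100 times;
                ++j;                       // the address is the base of a plus i each time
            }
        }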