c#, redis, in-memory-database, ncache

Is it possible to keep objects in memory for an even faster cache store than Redis or Memcached?


In-memory cache stores such as Redis or Memcached require serializing/deserializing complex objects, such as a C# POCO, when storing and retrieving them.

Is it not possible to just keep the data to be cached in memory as the object graph and eliminate this bottleneck? After all, the cached, serialized data is still in memory, so why not keep the original objects in memory for the fastest cache possible (and maybe use named pipes to implement a distributed cache)?
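
To make the idea concrete, here is a minimal sketch of what such a cache could look like: an in-process store that keeps live object references in a ConcurrentDictionary, so nothing is ever serialized. The class and member names are purely illustrative, not from any existing library.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// A plain POCO (hypothetical) used as the cached value.
public class Order
{
    public int Id { get; set; }
    public List<string> Lines { get; set; } = new List<string>();
}

// Hypothetical in-process cache: values are held as live object references,
// so Set/TryGet never serialize or deserialize anything.
public class InProcessObjectCache
{
    private readonly ConcurrentDictionary<string, object> _items =
        new ConcurrentDictionary<string, object>();

    public void Set(string key, object value) => _items[key] = value;

    public bool TryGet<T>(string key, out T value)
    {
        if (_items.TryGetValue(key, out var obj) && obj is T typed)
        {
            value = typed;
            return true;
        }
        value = default;
        return false;
    }
}

public static class Demo
{
    public static void Main()
    {
        var cache = new InProcessObjectCache();
        cache.Set("order:42", new Order { Id = 42, Lines = { "widget" } });

        // The cached Order comes back as the same reference that was stored:
        // the whole object graph is available with no copying at all.
        if (cache.TryGet<Order>("order:42", out var order))
            Console.WriteLine($"{order.Id}: {order.Lines.Count} line(s)");
    }
}
```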

Thank you


Solution

  • Waescher is right that with a single machine and a single process, you could store the object graph in local memory. With multiple processes, the graph has to live in shared memory, and that opens up a whole can of worms that may or may not be addressed by third-party products such as Redis or Memcached. For example, concurrency now has to be managed (i.e., you must make sure one process isn't reading the graph while another is modifying it, or use a more ambitious lock-free algorithm). The same issue has to be addressed within a single multi-threaded process; a minimal sketch of one approach appears after this answer.

    Object references in the shared-memory case, if they're raw memory pointers, might still be usable as long as the shared memory segment is mapped to the same address in every process. Whether that is possible depends on the size of the shared memory segment and on each process's size and memory map. Using a system-generated object identifier/reference (e.g., a sequentially increasing 4- or 8-byte integer) would obviate that problem; the sketch below uses exactly that kind of identifier.

    At the end of the day, if you store the object graph in any repository, it has to be serialized/deserialized into/out of that repository's storage.
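
A minimal sketch of both points, assuming a single multi-threaded process: a ReaderWriterLockSlim guards the graph so that readers never see it mid-modification, and objects are referenced by system-generated 8-byte integer IDs rather than by raw pointers. All type and member names below are illustrative rather than taken from any particular product.

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical graph cache: each object is registered under a sequentially
// generated 64-bit identifier, and cached objects link to one another by ID
// rather than by memory address. A ReaderWriterLockSlim keeps concurrent
// readers from observing a half-updated dictionary.
public class GraphCache
{
    private readonly Dictionary<long, object> _objects = new Dictionary<long, object>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private long _nextId;

    // Register an object and hand back its stable identifier.
    public long Add(object value)
    {
        long id = Interlocked.Increment(ref _nextId);
        _lock.EnterWriteLock();
        try { _objects[id] = value; }
        finally { _lock.ExitWriteLock(); }
        return id;
    }

    // Resolve an identifier back to the live object; many readers can do this in parallel.
    public bool TryGet(long id, out object value)
    {
        _lock.EnterReadLock();
        try { return _objects.TryGetValue(id, out value); }
        finally { _lock.ExitReadLock(); }
    }
}

// A cached node that refers to related objects by ID instead of by reference,
// so the links would stay meaningful even if the store were mapped at a
// different base address in another process.
public class CustomerNode
{
    public string Name { get; set; } = "";
    public List<long> OrderIds { get; } = new List<long>();
}
```

The ID scheme is what would carry over to the shared-memory case: the identifiers survive being mapped at different base addresses, whereas raw pointers would not, and the lock (or a cross-process equivalent such as a named Mutex) is what keeps concurrent access safe.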