Tags: c#, memory, memory-management, dynamic-memory-allocation

MemoryMappedFiles: How much memory can be allocated for files


I have large CT raw-data files that can reach 20 to 30 GB. Most of the computers in our department have only 3 GB of RAM, but for processing we need to go through all of the available data. Of course we could do this sequentially with read and write calls, but it is sometimes necessary to keep some of the data in memory.

Currently I have my own memory management built around a so-called MappableObject. Each raw-data file contains, say, 20,000 structs, each holding different data, and each MappableObject refers to a location in the file.
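To make the idea concrete, here is a minimal sketch of such an on-demand mechanism. The names (RawSample, MappableObject), the record layout, and the file path are all hypothetical; the point is only that each object remembers a file offset and reads its struct when asked.

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct RawSample            // placeholder for one of the ~20,000 records per file
{
    public int Id;
    public double Value;
    // ... further fields up to the real record size
}

class MappableObject
{
    private readonly string _path;
    private readonly long _offset;   // byte position of this record in the file
    private RawSample? _cached;      // present only while "mapped"

    public MappableObject(string path, long offset)
    {
        _path = path;
        _offset = offset;
    }

    public RawSample Map()           // load the struct into memory on demand
    {
        if (_cached.HasValue) return _cached.Value;

        var buffer = new byte[Marshal.SizeOf(typeof(RawSample))];
        using (var fs = new FileStream(_path, FileMode.Open, FileAccess.Read))
        {
            fs.Seek(_offset, SeekOrigin.Begin);
            fs.Read(buffer, 0, buffer.Length);
        }

        var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            _cached = (RawSample)Marshal.PtrToStructure(
                handle.AddrOfPinnedObject(), typeof(RawSample));
        }
        finally { handle.Free(); }
        return _cached.Value;
    }

    public void Unmap() { _cached = null; }   // drop the copy to free managed memory
}
```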

In C# I created a partially working mechanism that automatically maps and unmaps the data as necessary. I have known about memory-mapped files for years, but under .NET 3.5 I held off using them because I knew they would be available natively in .NET 4.0.

So today I tried the MemoryMappedFiles classes and found that I cannot allocate as much memory as I need. On a 32-bit system, mapping 20 GB simply does not work because it exceeds the size of the logical address space. That much is clear to me.
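For illustration, this is roughly what the failing attempt looks like (the file name is made up, and the exact exception type can vary, so it is caught broadly here): opening the mapping object itself is fine, but requesting a single view over the whole 20 GB file cannot succeed in a 32-bit process, whose user address space is only 2 to 4 GB.

```csharp
using System;
using System.IO.MemoryMappedFiles;

class AddressSpaceLimitDemo
{
    static void Main()
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(@"D:\ct\scan0001.raw"))
        {
            try
            {
                // No arguments = "map the entire file" - this is the part that
                // fails for a 20 GB file in a 32-bit process.
                using (var view = mmf.CreateViewAccessor())
                {
                    Console.WriteLine("Mapped the whole file.");
                }
            }
            catch (Exception ex)   // typically an IOException or OutOfMemoryException
            {
                Console.WriteLine("Could not map the whole file: " + ex.Message);
            }
        }
    }
}
```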

But is there a way to process such large files at all? What other options do I have? How do you solve problems like this?

Thanks, Martin


Solution

  • "Memory mapped", you can't map 20 gigabytes into a 2 gigabyte virtual address space. Getting 500 MB on a 32-bit operating system is tricky. Beware that it is not a good solution unless you need heavy random access to the file data. Which ought to be difficult when you have to partition the views. Sequential access through a regular file is unbeatable with very modest memory usage. Also beware the cost of marshaling the data from the MMF, you're still paying for the copy of the managed struct or the marshaling cost.