In my application I need to continuously write data chunks (around 2 MB each) about every 50 ms to a large file (around 2-7 GB). The writes are sequential and circular: I write chunk after chunk, and when I reach the end of the file I start again at the beginning.
Currently I'm doing it as follows:
In C# I call File.OpenWrite once to open the file for writing and set its size with SetLength. When I need to write a chunk, I pass the safe file handle to the unmanaged WriteFile (kernel32.dll), along with an OVERLAPPED structure that specifies the position in the file at which the chunk should be written. The chunk itself is stored in unmanaged memory, so I have an IntPtr that I can pass to WriteFile.
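For reference, here is a minimal, compilable sketch of the setup described above (the file name, file size, and demo loop count are invented for illustration):

    using System;
    using System.ComponentModel;
    using System.IO;
    using System.Runtime.InteropServices;
    using System.Threading;
    using Microsoft.Win32.SafeHandles;

    class CircularFileWriter
    {
        // WriteFile from kernel32; the OVERLAPPED argument carries the
        // target file offset, so no separate seek is needed.
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteFile(
            SafeFileHandle hFile,
            IntPtr lpBuffer,
            uint nNumberOfBytesToWrite,
            out uint lpNumberOfBytesWritten,
            ref NativeOverlapped lpOverlapped);

        const int ChunkSize = 2 * 1024 * 1024;          // ~2 MB per chunk
        const long FileSize = 2L * 1024 * 1024 * 1024;  // e.g. 2 GB

        static void Main()
        {
            using (FileStream fs = File.OpenWrite("data.bin")) // hypothetical file name
            {
                fs.SetLength(FileSize); // pre-allocate the whole file once

                IntPtr chunk = Marshal.AllocHGlobal(ChunkSize); // unmanaged source buffer
                long position = 0;
                try
                {
                    for (int i = 0; i < 10; i++) // demo: write a few chunks
                    {
                        // Pack the 64-bit file offset into the overlapped structure.
                        var overlapped = new NativeOverlapped
                        {
                            OffsetLow  = (int)(position & 0xFFFFFFFF),
                            OffsetHigh = (int)(position >> 32)
                        };

                        uint written;
                        if (!WriteFile(fs.SafeFileHandle, chunk, (uint)ChunkSize,
                                       out written, ref overlapped))
                            throw new Win32Exception(Marshal.GetLastWin32Error());

                        // Advance circularly: wrap to the start near the end of the file.
                        position += ChunkSize;
                        if (position + ChunkSize > FileSize)
                            position = 0;
                    }
                }
                finally
                {
                    Marshal.FreeHGlobal(chunk);
                }
            }
        }
    }

Since the handle is not opened with FILE_FLAG_OVERLAPPED, WriteFile completes synchronously here; the OVERLAPPED structure is used only to carry the file offset.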
Now I'd like to know if and how I can make this process more efficient. Any ideas?
Some questions in detail:
Using better hardware will probably be the most cost-efficient way to increase file-writing performance. There is a paper from Microsoft Research that will answer most of your questions: Sequential File Programming Patterns and Performance with .NET. The source code (C#) is also available for download if you want to run the paper's tests on your own machine.
In short:
This thread on social.msdn might also be of interest.
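On that note, before tuning the P/Invoke path it may be worth measuring whether a plain FileStream already sustains the required rate (2 MB every 50 ms is roughly 40 MB/s, which sequential disk throughput can typically sustain). A minimal baseline sketch, with the file name and loop count again invented for illustration:

    using System;
    using System.IO;

    class FileStreamBaseline
    {
        const int ChunkSize = 2 * 1024 * 1024;          // ~2 MB per chunk
        const long FileSize = 2L * 1024 * 1024 * 1024;  // e.g. 2 GB

        static void Main()
        {
            byte[] chunk = new byte[ChunkSize]; // managed copy of the data to write

            // FileOptions.WriteThrough asks the OS to push writes to disk
            // immediately rather than lazily; benchmark with and without it.
            using (var fs = new FileStream("data.bin", FileMode.OpenOrCreate,
                FileAccess.Write, FileShare.Read, bufferSize: ChunkSize,
                FileOptions.WriteThrough))
            {
                fs.SetLength(FileSize); // pre-allocate the whole file once

                long position = 0;
                for (int i = 0; i < 10; i++) // demo: write a few chunks
                {
                    fs.Seek(position, SeekOrigin.Begin);
                    fs.Write(chunk, 0, chunk.Length);

                    // Advance circularly: wrap to the start near the end of the file.
                    position += ChunkSize;
                    if (position + ChunkSize > FileSize)
                        position = 0;
                }
            }
        }
    }

If the data has to stay in unmanaged memory, you would first copy it into the managed buffer with Marshal.Copy (or wrap the IntPtr in a Span<byte> on newer runtimes), an extra cost this baseline does not include.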