I have the following code that is throwing a std::bad_alloc
exception:
std::vector<std::vector<double>> myVector(nlines);
for (int i = 0; i < nlines; i++)
{
    try
    {
        std::vector<double> iVector(ncolumns);
        myVector[i] = iVector;
    }
    catch (std::exception& e)
    {
        /* catches a bad_alloc here */
    }
}
This code seems to work when nlines is about 500,000 (ncolumns will usually be less than 10), but when I tried it on a full-sized data set where nlines = 2,600,000 I get the bad_alloc exception.
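For reference, here is a self-contained sketch that allocates the same structure in one constructor call instead of the loop (the hard-coded nlines and ncolumns are just stand-ins for my real data dimensions):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    // Stand-in values; in the real program these come from the input data.
    const std::size_t nlines = 2600000;
    const std::size_t ncolumns = 10;

    try
    {
        // Allocate all rows at once rather than assigning them one at a time.
        std::vector<std::vector<double>> myVector(nlines,
                                                  std::vector<double>(ncolumns));
        std::cout << "Allocated " << myVector.size() << " rows\n";
    }
    catch (const std::bad_alloc& e)
    {
        std::cerr << "bad_alloc: " << e.what() << '\n';
    }
}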
I have 12 GB of memory, and watching my memory usage while the program runs, it goes from 28% (before starting) up to 42% (when the exception is thrown). So it looks like I still have memory available.
I found this post, which says that vectors allocate their memory on the heap. According to that post, which links to this MSDN page, I can set the amount of heap (in bytes) that my code can use. Initially the Heap Commit Size and Heap Reserve Size were blank; when I put in values of 2,000,000,000 (2 GB) I still got the same problem.
To make things a little more interesting, this C++ code (not C++/CLI) is being called through interop from a C#/.NET application. The Heap Commit Size and Heap Reserve Size changes were made on the C++ project. I don't know whether I also need to set these on the .NET projects, or how I would do that.
Any advice or help would be appreciated.
As Neil Kirk points out, a 32-bit process is limited to 2 GB of memory, as stated in this MSDN page. This is true for both unmanaged and managed applications.
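A quick way to confirm which case applies is to print the pointer size from inside the native code (a throwaway sketch, not part of the real program):

#include <iostream>

int main()
{
    // 4 means a 32-bit process (~2 GB of user address space by default on Windows),
    // 8 means a 64-bit process.
    std::cout << "sizeof(void*) = " << sizeof(void*) << '\n';
}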
There are many SO questions about this; for example, I found these:
How much memory can a 32 bit process access on a 64 bit operating system?
The maximum amount of memory any single process on Windows can address
In my case I think that the interop between the .NET code and the unmanaged code is doing some buffering and so is eating up the available memory. Ideally I should only have two or three 2D vectors of 2,600,000 x 10 elements (if a double is 8 bytes, that is still less than 1 GB in total). I will need to investigate this further.
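As a back-of-the-envelope check on those numbers I put together this sketch (the per-allocation heap overhead constant is only a guess; the real figure depends on the CRT heap):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t nlines = 2600000;
    const std::size_t ncolumns = 10;

    // Payload: the doubles themselves (~208 MB per 2D vector).
    const std::size_t payload = nlines * ncolumns * sizeof(double);

    // Bookkeeping: each inner vector is a separate object plus its own heap
    // allocation. The 16-byte per-allocation overhead below is an assumption.
    const std::size_t perRowObject    = sizeof(std::vector<double>);
    const std::size_t perRowHeapGuess = 16;
    const std::size_t bookkeeping = nlines * (perRowObject + perRowHeapGuess);

    std::cout << "payload     ~ " << payload / 1e6     << " MB\n";
    std::cout << "bookkeeping ~ " << bookkeeping / 1e6 << " MB\n";
}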