I have the following code that allocates a huge amount of data; if it needs more memory than is available (here: 32 GB), it should throw an exception:
bool MyObject::init()
{
    char* emergency_memory = new char[32768];
    try
    {
        std::vector<std::vector<MyData> > data_list;
        std::vector<MyData> data;
        data.resize(1000);
        for (size_t i = 0; i < 1000; i++)
        {
            data_list.push_back(data);
            data_list.push_back(data);
        }
    }
    catch (const std::bad_alloc& e)
    {
        delete[] emergency_memory;
        std::cout << "Data allocation failed: " << e.what() << std::endl;
        return false;
    }
    delete[] emergency_memory;
    return true;
}
The exception is never caught. The application just terminates, or crashes the operating system.
What did I do wrong?
Your new operator has to get its memory from somewhere. Since new is user-space code with no connection to physical memory whatsoever, all it can do is ask the kernel for memory via the sbrk() or mmap() syscalls. The kernel responds by mapping some additional memory pages into your process's virtual address space.
As it happens, any memory page the kernel hands to a user process must be zeroed out first. If this step were skipped, the kernel might leak sensitive data from another application, or from itself, to the user-space process.
It also happens that the kernel always keeps one memory page that contains only zeros. So it can fulfill any mmap() request by simply mapping this one zero page into the new address range. It marks these mappings as copy-on-write (COW), so that whenever your user-space process starts writing to such a page, the kernel immediately creates a copy of the zero page. Only then does the kernel have to hunt for another page of physical memory to back its promises.
You see the problem? The kernel does not need any physical memory until the point where your process actually writes to it. This is called memory over-committing. Another version of this happens when you fork a process. Do you think the kernel immediately copies your memory when you call fork()? Of course not. It just creates COW mappings of the existing memory pages!
(This is an important optimization: many mappings that are created never need to be backed by additional memory. It matters especially for fork(): that call is usually followed immediately by an exec() call, which tears the COW mappings right back down.)
The downside is that the kernel never knows how much physical memory it actually needs until it fails to back its own promises. That is why you cannot rely on sbrk() or mmap() to return an error when you run out of memory: you don't run out of memory until you write to the mapped memory. And with no error code returned from the syscall, your new operator does not know when to throw. So it won't throw.
What happens instead is that the kernel, when it realizes it has run out of physical memory, starts shooting down processes. That is the job of the aptly named Out-Of-Memory killer. It exists to avoid an immediate reboot, and if its heuristics work well, it will actually shoot the right processes. The killed processes don't get so much as a warning; they are simply terminated by a signal. Again, no user-space exception is involved.
TL;DR: Catching bad_alloc exceptions on an over-committing kernel is next to useless.