Tags: c++, memory, memory-management, storage-duration

Storage duration vs location in C++


Sometimes I see the concept of storage duration mixed up with the question of where the storage actually lives. That is because I've often seen statements like the following:

int i; // This is in the stack!
int* j = new int; // This is in the heap!

But is this really true 100% of the time? Does C++ guarantee where the storage takes place, or is that decided by the compiler?

Is the location of the storage independent of the duration?

For example, taking those two snippets:

void something()
{
   int i = 0;
   std::cout << "i is " << i << std::endl;
}

vs:

void something()
{
   int* i = new int(0);
   std::cout << "i is " << *i << std::endl;
   delete i;
}

Both are more or less equivalent regarding the lifetime of i, which is created at the beginning and destroyed at the end of the block. Here the compiler could just use the stack (I don't know!), and the opposite could happen too:

void something()
{
   int n[100000000]; // Man this is big
}

vs:

void something()
{
  int* n = new int[100000000];
  delete[] n;
}

Those two cases should be in the heap to avoid a stack overflow (or at least that's what I've been told so far...). Does the compiler also take that into account, besides the storage duration?


Solution

  • Is the location of the storage independent of the duration?

    A0: Duration specifies expected/required behavior.
    A1: The standard does not specify how that is implemented.
    A2: The standard does not even require a heap or stack!

    void something()
    {
       int i = 0;
       std::cout << "i is " << i << std::endl;
    }
    
    void something()
    {
       int* i = new int(0);
       std::cout << "i is " << *i << std::endl;
       delete i;
    }
    

    In the first example you have "automatic" storage duration, and in the second, "dynamic" storage duration. The difference is that an "automatic" object is always destroyed at the end of its scope, while the dynamic one is destroyed only if the delete is actually executed.
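
    To make the destruction points observable, here is a minimal sketch; the Tracer type is made up for illustration, and its destructor simply prints when it runs:

    #include <iostream>
    
    struct Tracer {
        const char* name;
        explicit Tracer(const char* n) : name(n) {}
        ~Tracer() { std::cout << name << " destroyed\n"; }
    };
    
    int main()
    {
        {
            Tracer  a("automatic");             // automatic storage duration
            Tracer* d = new Tracer("dynamic");  // dynamic storage duration
            delete d;  // prints "dynamic destroyed"; without this line it never runs
        }              // prints "automatic destroyed", always
        std::cout << "after block\n";
    }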

    Where the objects are created is not specified by the standard and is completely left to the implementation.

    On implementations that use an underlying stack, that would be an easy implementation choice for the first example; but it is not a requirement. The implementation can quite easily ask the OS for dynamic memory for the space required by the integer and still behave as the standard defines, as long as the code to release that memory is also emitted and executed when the object goes out of scope.
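
    As a rough illustration of that freedom (hand-written, not what any real compiler emits), the observable behavior of the first example could be reproduced with heap-backed storage like this:

    #include <iostream>
    #include <memory>
    
    void something()
    {
        // Conceptually still `int i;`: the bytes happen to come from the
        // free store, but they are released when the scope ends, so the
        // automatic storage *duration* the standard requires is preserved.
        std::unique_ptr<int> storage = std::make_unique<int>(0);
        int& i = *storage;
        std::cout << "i is " << i << std::endl;
    }   // storage's destructor frees the memory here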

    Conversely, the easy way to implement dynamic storage duration (second example) is to allocate memory from the runtime and then release it (assuming your implementation has this ability) when you hit the delete. But this is not a requirement. If the compiler can prove that there are no exceptions and that you will always hit the delete, then it could just as easily put the object on the stack and destroy it normally. NOTE: if the compiler determines that the object is always leaked, it could still put it on the heap and simply not destroy it when it goes out of scope (that is a perfectly valid implementation).
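
    Real compilers do use this freedom: since C++14 an implementation is allowed to omit an allocation when it can pair the new with its matching delete. Whether it happens depends on the compiler and flags, but a sketch like the following (the function is hypothetical) is commonly folded down to a constant under optimization:

    int sum_via_new()
    {
        int* p = new int(21);   // the allocation may be elided entirely...
        int result = *p * 2;
        delete p;               // ...because the matching delete provably runs
        return result;          // often compiled to just `return 42;`
    }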

    The second set of examples adds some complications:

    Code:

    int n[100000000]; // Man this is big
    

    This is indeed very large. Some implementations may not be able to support this on a stack (the stack frame size may be limited by the OS, the hardware, or the compiler).

    A perfectly valid implementation is to dynamically allocate the memory for this and ensure that the memory is released when the object goes out of scope.
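
    In fact you can ask for exactly that behavior yourself; a sketch of the idiomatic way to get dynamically allocated elements with an automatically destroyed handle:

    #include <vector>
    
    void something()
    {
        // The elements live in dynamic storage, but the vector object itself
        // has automatic storage duration, so the memory is released at the
        // end of the scope even if an exception is thrown.
        std::vector<int> n(100000000);
    }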

    Another implementation is to simply pre-allocate the memory, not on the stack, but in the bss (going from memory here: that is the section of an executable that holds zero-initialized data), as long as it implements the expected behavior of calling any destructors at the end of scope (I know int does not have a destructor, so that makes it easy).
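
    For comparison, C++ does expose a duration that maps naturally onto that section: static storage duration. Assuming a typical toolchain (placement is still implementation-specific), a zero-initialized array like this one usually ends up in .bss:

    void something()
    {
        // Static storage duration: allocated for the whole run of the program
        // and zero-initialized before main() starts; common implementations
        // place it in the executable's .bss section rather than on any stack.
        static int n[100000000];
        n[0] = 1;
    }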