I read that the default stack size in C is 1 MB on Windows and 8 MB on Linux, but that this size can be changed.
1 - Why would I use the heap when I'm worried about size limits, when I could just grow the stack to fit all the data?
2 - What are the disadvantages of raising the stack size limit?
1 - Why would I use the heap when I'm worried about size limits, when I could just grow the stack to fit all the data?
It's not about size, it's about lifetime. Objects with `auto` storage duration (i.e., allocated from the stack in most implementations) only exist until the end of their enclosing scope or function. That matters if you need something to persist across multiple function calls (such as a node in a list or tree).

Objects with *allocated* storage duration (i.e., allocated from the heap using `malloc`, `calloc`, or `realloc`) exist until you explicitly deallocate them with `free`.
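To make the lifetime difference concrete, here's a minimal sketch (the function names `make_node_on_stack` and `make_node_on_heap` are made up for the example, but the pattern is standard C):

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Broken: 'n' has automatic storage duration, so it stops existing
 * when this function returns. The caller gets a dangling pointer,
 * and using it is undefined behavior. (Never called below.) */
struct node *make_node_on_stack(int value) {
    struct node n = { value, NULL };
    return &n;
}

/* Fine: the node has allocated storage duration and persists until
 * it is explicitly passed to free(). */
struct node *make_node_on_heap(int value) {
    struct node *n = malloc(sizeof *n);
    if (n != NULL) {
        n->value = value;
        n->next = NULL;
    }
    return n;
}

int main(void) {
    struct node *n = make_node_on_heap(42);
    if (n != NULL) {
        printf("%d\n", n->value);
        free(n);  /* lifetime ends here, not at the end of some scope */
    }
    return 0;
}
```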
2 - What are the disadvantages of raising the stack size limit?
You're making assumptions about what the underlying implementation can support, which can limit your ability to port code to other platforms (which may or may not be a concern for you). You're also trading frame size for stack depth: if you set aside more space per function call, you'll run out of stack space after fewer calls, which can matter for deeply nested calls or recursive algorithms.
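The portability point is visible in how the limit is changed: there is no standard C way to do it. As a hedged sketch, on a POSIX system you can query and raise the soft limit at runtime with `getrlimit`/`setrlimit` and `RLIMIT_STACK` (or `ulimit -s` in the shell), while on Windows the size is typically fixed at link time (e.g., MSVC's `/STACK` linker option). The 64 MB target below is an arbitrary example value:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("current stack soft limit: unlimited\n");
    else
        printf("current stack soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);

    /* Ask for 64 MB, staying within the hard limit. Whether this
     * affects the already-running main thread or only subsequently
     * created threads/processes is platform-specific. */
    rlim_t wanted = 64ULL * 1024 * 1024;
    if (rl.rlim_max == RLIM_INFINITY || wanted <= rl.rlim_max) {
        rl.rlim_cur = wanted;
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            perror("setrlimit");
    }
    return 0;
}
```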
This is why the usual practice is to either allocate very large objects dynamically, or to make them `static`.
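For illustration (the 16 MB size and function names are made up for the example), the two safe placements of a large buffer look like this:

```c
#include <stdlib.h>
#include <string.h>

#define BIG (16 * 1024 * 1024)  /* 16 MB: far beyond a 1 MB default stack */

/* Don't: an automatic array this large would likely overflow the stack.
 *
 * void on_the_stack(void) {
 *     char buf[BIG];
 *     memset(buf, 0, sizeof buf);
 * }
 */

/* Option 1: static storage. Not on the stack, but there is only one
 * instance shared by every call, so this is not reentrant or
 * thread-safe. */
void with_static(void) {
    static char buf[BIG];
    memset(buf, 0, sizeof buf);
}

/* Option 2: allocated storage. One buffer per call, at the cost of
 * handling allocation failure and an explicit free(). */
void with_heap(void) {
    char *buf = malloc(BIG);
    if (buf != NULL) {
        memset(buf, 0, BIG);
        free(buf);
    }
}

int main(void) {
    with_static();
    with_heap();
    return 0;
}
```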