On every platform I've worked with, the stack size was always bounded: you had to specify a maximum stack size at some point before the program starts, and it was preallocated. Why can't the stack be a linked list residing in heap memory? It would then be virtually unlimited. Is a bounded stack some inherent property of all of today's computer architectures?
My question is not related to any specific programming language or platform; it's pure academic curiosity.
(By 'stack' I mean the memory where threads store execution traces and arguments, if there is any ambiguity.)
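To make the idea concrete, here's a rough sketch of what I imagine, with each call frame heap-allocated and linked back to its caller (all names are illustrative, not any real ABI):

#include <stdlib.h>

/* Each frame is heap-allocated and linked to its caller, instead
   of sitting at a fixed offset in one contiguous block. */
struct frame {
    struct frame *caller;   /* previous frame in the chain  */
    long locals[8];         /* this call's local variables  */
};

struct frame *push_call(struct frame *current) {
    struct frame *f = malloc(sizeof *f);   /* one allocation per call */
    f->caller = current;
    return f;
}

struct frame *pop_return(struct frame *current) {
    struct frame *caller = current->caller;
    free(current);                         /* one free per return     */
    return caller;
}

int main(void) {
    struct frame *top = push_call(NULL);   /* outermost call */
    top = push_call(top);                  /* nested call    */
    top = pop_return(top);
    top = pop_return(top);
    return 0;
}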
Contiguously stored stacks need a known maximum size so that several of them can be laid out in memory at once; such stacks are also faster to use and easier to implement than stacks built as linked lists (see the sketch below). But on Windows and Linux, process/thread stacks live in virtual address space, so the only general reasons for limiting them come from programming languages or compilers, as the following points show.
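A minimal sketch of that packing constraint, with illustrative sizes and names: because stack i ends exactly where stack i+1 begins, every stack's maximum size must be fixed up front.

#include <stdlib.h>

enum { STACK_SIZE = 1 << 20, NUM_THREADS = 8 };

int main(void) {
    /* One region carved into fixed-size slots, one per thread. */
    char *region = malloc((size_t)STACK_SIZE * NUM_THREADS);
    char *stack_top[NUM_THREADS];

    /* Stacks grow downward, so each thread starts at the top of its
       slot; growing past STACK_SIZE would trample the neighbour. */
    for (int i = 0; i < NUM_THREADS; i++)
        stack_top[i] = region + (size_t)(i + 1) * STACK_SIZE;

    (void)stack_top;
    free(region);
    return 0;
}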
On commonly used hardware architectures there is no dedicated hardware representation of the stack; it is ordinary memory addressed through a stack-pointer register. It might be possible, on certain systems, to use the stack's locality to keep certain data in a lower-level CPU cache, but that is questionable.
On Linux, each process gets its own virtual address space, so at least one of its stacks (the main thread's) can have no size limit; it is then bounded only by the number of possible addresses and the data actually stored. That limit is controlled via RLIMIT_STACK:
#include <sys/resource.h>

int main(void) {
    struct rlimit limit;

    /* Read the current limits: rlim_cur is the soft (user-settable)
       limit, rlim_max is the hard (system/admin) limit. */
    getrlimit(RLIMIT_STACK, &limit);

    /* Request an unlimited stack; this succeeds only if the hard
       limit is already RLIM_INFINITY or the process is privileged. */
    limit.rlim_cur = RLIM_INFINITY;
    setrlimit(RLIMIT_STACK, &limit);   /* applies to the current process */
    return 0;
}
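Note that RLIMIT_STACK only governs the main thread's stack. Stacks of threads created with pthread_create are fixed at creation time and cannot grow afterwards; a minimal sketch (the 64 MiB figure is arbitrary):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    puts("running on a fixed-size stack");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    /* The stack size is fixed here, at creation time; it cannot
       grow later, no matter how RLIMIT_STACK changes afterwards. */
    pthread_attr_setstacksize(&attr, 64 * 1024 * 1024);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}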
On Windows there is no option for unlimited stack space: each thread's maximum stack size is reserved when the thread is created, via the executable's stack-reserve field or a CreateThread parameter, and the thread can never grow past that reservation. Maybe that is for compatibility or performance, or maybe for enforced safety.
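For comparison, a minimal sketch of fixing a Windows thread's stack reservation at creation time (the 256 MiB figure is arbitrary):

#include <windows.h>

static DWORD WINAPI worker(LPVOID arg) {
    (void)arg;
    return 0;
}

int main(void) {
    /* The reservation is fixed here; the thread can never use more
       stack than this, and there is no "unlimited" value. */
    HANDLE h = CreateThread(NULL,
                            256u * 1024 * 1024,   /* dwStackSize */
                            worker, NULL,
                            STACK_SIZE_PARAM_IS_A_RESERVATION,
                            NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}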