Let me start by saying my question is not about what a stack overflow is, but about ways to make one happen without compile-time errors/warnings. I know (first-hand) you can overflow the stack with recursion:
void endlessRecursion()
{
    int x = 1;
    if (x) endlessRecursion(); // the 'if' is just to hush the compiler
}
My question is, is it possible to overflow the stack by declaring too many local variables. The obvious way is just declare a huge array like so:
void myStackOverflow()
{
    char maxedArrSize[0x3FFFFFFF]; // < 1 GB, compiler didn't yell
}
In practice, even 0xFFFFF bytes causes a stack overflow on my machine. So, I was wondering: does declaring many small local variables have the same effect as one large one, and is there a way to generate the declarations at compile time, e.g. with the preprocessor?
Yes, allocating a large amount of memory will cause a stack overflow. It shouldn't matter whether you allocate one large variable or a lot of small ones; the total size is what's relevant.
You can't do a compile-time loop with the preprocessor, but you can implement some shortcuts that let you generate large amounts of code without typing it all. For example:
#define DECLARE1 { int i;
#define END1 }
#define DECLARE2 DECLARE1 DECLARE1
#define END2 END1 END1
#define DECLARE4 DECLARE2 DECLARE2
#define END4 END2 END2
and so on. This puts the multiple int i; declarations in nested blocks, ensuring that all the objects exist at the same time while avoiding name conflicts. (I couldn't think of a way to give all the variables distinct names.)
DECLARE4 END4
expands to:
{ int i; { int i; { int i; { int i; } } } }
This won't work if your compiler imposes a limit on the length of a line after preprocessing.
The lesson here is that the preprocessor isn't really designed for this kind of thing. It's much easier and more flexible to write a program in your favorite scripting language that generates the declarations. For example, in bash:
for i in {1..100} ; do
echo " int i$i;"
done