Suppose that a function from a generic library wants to use long unsigned ints (say, 64-bit) in all cases, but in my program I would like to use plain unsigned ints (say, 32-bit). Then I run into a situation like the following:
void f(long unsigned int *a) {
    *a = 10;
}
void main(void) {
    unsigned int b;
    f(&b);
    return;
}
Do I understand correctly that this, in fact, is not a good idea, and that the function f will overwrite 32 more bits (the ones following b in memory), writing 0 into them (as the compiler casts from unsigned int * to long unsigned int *)? If I understand it correctly, the following, however, will not overwrite anything (as the compiler converts the value from long unsigned int to unsigned int):
long unsigned int f(void) {
    return 10;
}
void main(void) {
    unsigned int b;
    b = f();
    return;
}
Is it correct?
This second implementation has a drawback, though: it does not help when it is convenient for me that the function returns some other part of the calculation, and 10 is just an additional output of it...
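For example, what I have in mind looks roughly like this (just a sketch; the function g and the value 42 are made up): the main result comes back as the return value, and 10 is only a by-product that I would also like to get out:
long unsigned int g(long unsigned int *count) {
    *count = 10;   /* the additional detail */
    return 42;     /* some other part of the calculation */
}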
Do I understand correctly that this, in fact, is not a good idea
Yes, that is correct.
... and that the function f will overwrite 32 more bits (the ones following b in memory), writing 0 into them (as the compiler casts from unsigned int * to long unsigned int *)?
That's one possibility. An unsigned long int* may also have stricter alignment requirements than an unsigned int*, so it could crash before even getting that far.
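To see why, you can compare the two types' sizes and alignment requirements; a minimal sketch (C11 for _Alignof; the exact numbers are implementation-defined, e.g. 4/4 versus 8/8 on a typical 64-bit Linux system):
#include <stdio.h>

int main(void) {
    /* sizeof shows how many bytes each type occupies; _Alignof shows
       how strictly a pointer to it must be aligned */
    printf("unsigned int:      size %zu, align %zu\n",
           sizeof(unsigned int), _Alignof(unsigned int));
    printf("unsigned long int: size %zu, align %zu\n",
           sizeof(unsigned long int), _Alignof(unsigned long int));
    return 0;
}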
What you'd typically do if you have an interface like that is to provide a variable of the correct type and then assign it to a variable of the type you want:
#include <limits.h> // for UINT_MAX, if the error check below is used

void f(unsigned long int *a) {
    *a = 10;
}
int main(void) {
    unsigned long int tmp;
    f(&tmp);
    // if (tmp > UINT_MAX) ... // possible error check
    unsigned int b = tmp;
    return 0;
}
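If this pattern is needed in several places, the temporary can be hidden behind a small wrapper; a sketch reusing f from above (the name f_narrow is made up):
unsigned int f_narrow(void) {
    unsigned long int tmp;
    f(&tmp);
    // if (tmp > UINT_MAX) ... // same possible error check as above
    return (unsigned int)tmp;
}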