My question isn't one of correctness so much as efficiency. I am currently attempting to verify a function in C which relies on a typedef struct containing an array and an int. However, I have noticed that clightgen (with the normalize option) initializes the array in a fairly inefficient way, and I was wondering if there is a way to mitigate this inefficiency.
I've made a small (and a bit silly) toy example to showcase the behavior in question.
#define MAX_N 100

typedef struct {
    int n;
    int a[MAX_N];
} N_Array;
I then have the following function:
void copy(const N_Array* x, N_Array* y) {
    int i;
    N_Array ret = {x->n}; // This seems to be where the problematic behavior arises
    for (i = 0; i < x->n; i++) {
        ret.a[i] = x->a[i];
    }
    y->n = ret.n;
    for (i = 0; i < ret.n; i++) {
        y->a[i] = ret.a[i];
    }
}
The line in which the n field of ret is assigned appears to trigger a 100-line initialization of the a field. While this is conceptually easy to deal with (100 or so forwards), it is nearly impossible to actually get through due to the time it takes, and it only gets worse as the value of MAX_N becomes larger.
I hope this isn't too esoteric a question to ask, but is there either a "better" version of forward I can use for this situation, or an easy way to circumvent this initialization behavior?
The problem is that the local-variable initialization N_Array ret = {x->n}; really does cause the C compiler to emit 100 assignment statements: in C, an initializer list zero-initializes every member it does not mention, so all of ret.a gets initialized. It is best to write the program as:
N_Array ret;
ret.n = x->n;
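For completeness, here is a sketch of the whole function with that change applied. Only the declaration of ret differs from the original; because ret has no initializer, clightgen has no per-element initialization of ret.a to normalize:

```c
#include <assert.h>

#define MAX_N 100

typedef struct {
    int n;
    int a[MAX_N];
} N_Array;

void copy(const N_Array* x, N_Array* y) {
    int i;
    N_Array ret;    /* declared without an initializer: ret.a is left
                       uninitialized, so no 100 assignments are emitted */
    ret.n = x->n;   /* a single field assignment replaces = {x->n} */
    for (i = 0; i < x->n; i++) {
        ret.a[i] = x->a[i];
    }
    y->n = ret.n;
    for (i = 0; i < ret.n; i++) {
        y->a[i] = ret.a[i];
    }
}
```

The behavior is unchanged (only the first x->n elements of the array are ever read or written), but the generated Clight now contains just the statements visible in the source.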